





Advansys is a dynamic solutions provider focused on delivering smart, modular, and sustainable technology solutions that enhance operations, improve customer experiences, and drive business modernization. With over 400 skilled engineers, we serve 100+ enterprise customers across 14 countries. We specialize in a wide array of premium services, including Business Automation, Industrial Digitization, Low-Code Development, Cloud Services, Warehouse Automation, and Strategic Outsourcing. Founded in 2014, Advansys is part of the INTRO Group, a private conglomerate established in 1980 with diverse investments across business areas including oil and gas, real estate, specialized engineering, financial investment, and food & manufacturing.

We are seeking an experienced and highly skilled RAID Developer to work on complex data-related projects. The ideal candidate has a strong background in SQL and PL/SQL, experience with Hive, hands-on knowledge of Apache Spark, and proficiency in Python programming. You will be responsible for developing, optimizing, and maintaining robust data pipelines, implementing high-performance SQL queries, and integrating data systems to ensure reliable and scalable solutions.

**Job description:**

* Develop and maintain SQL-based solutions, including complex queries and PL/SQL procedures for efficient data handling and reporting.
* Design and implement scalable data processing systems using Apache Spark and Hive to process large-scale data efficiently.
* Build and optimize data pipelines and ETL processes using Python for automation, data extraction, transformation, and loading.
* Work with large datasets, ensuring efficient querying and manipulation through Spark and Hive, and seamless integration across various data sources.
* Collaborate with data scientists, analysts, and engineers to understand business requirements and translate them into technical solutions.
* Troubleshoot and optimize SQL queries, PL/SQL stored procedures, and Spark jobs for improved performance.
* Monitor and ensure the performance and scalability of data systems, implementing proactive measures to handle data growth.
* Ensure adherence to best practices in code quality, security, and testing for data systems.
* Contribute to the design and architecture of data pipelines that support business intelligence and reporting applications.

**Requirements:**

* Familiarity with additional big data technologies (e.g., Hadoop, Kafka).
* Experience with Linux.
* Knowledge of data warehousing concepts and architecture.
* Strong communication skills to interact effectively with cross-functional teams and stakeholders.

To apply, please submit your resume!


