We are looking for a talented Data Engineer to join our team and help design, build, and maintain robust data pipelines and systems. In this role, you will enable data-driven decision-making by ensuring that data is collected, processed, and made accessible efficiently and securely across the organization.
Responsibilities:
Design, develop, and maintain scalable data pipelines and ETL (Extract, Transform, Load) processes.
Build and optimize data architectures for data ingestion, transformation, and storage.
Collaborate with data scientists, analysts, and software engineers to deliver clean and reliable datasets.
Develop and maintain data models, data warehouses, and data lakes.
Ensure data quality, integrity, and governance across all data pipelines.
Monitor and improve the performance, scalability, and reliability of data systems.
Implement best practices in data management, security, and compliance.
Troubleshoot and resolve data-related issues and support data infrastructure.
Qualifications:
Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field.
3+ years of experience as a Data Engineer or in a similar role.
Strong programming skills in Python, Java, or Scala.
Hands-on experience with ETL tools and frameworks (e.g., Apache Airflow, Apache NiFi, Talend).
Proficiency in SQL and working with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
Experience with big data technologies such as Hadoop, Spark, Kafka, or Hive.
Familiarity with cloud platforms (AWS, Azure, or GCP) and cloud data services (e.g., Redshift, BigQuery, Snowflake).
Strong understanding of data modeling, warehousing concepts, and data governance.
Excellent problem-solving and communication skills.
