Responsibilities:
- Design, build, and maintain scalable data pipelines to support business analytics and data science activities.
- Exercise sound judgment in identifying data-related issues and escalating them to specialized teams when necessary.
- Work with cross-functional stakeholders to gather requirements and translate business needs into technical data solutions.
- Manage data ingestion from multiple sources including APIs, databases, and cloud platforms.
- Ensure data integrity, quality, and governance while following organizational policies and compliance standards.
- Perform data modeling, ETL/ELT pipeline development, and data warehouse optimization.
- Monitor and troubleshoot performance issues with data systems and implement necessary improvements.
- Support the storage and processing of large-scale datasets using cloud services and distributed systems.
- Maintain documentation for data workflows and provide guidance to data consumers across the organization.
- Stay current with emerging data engineering technologies and contribute to continuous improvement.
Skills:
- Data Engineering
- ETL/ELT Development
- SQL & NoSQL Databases
- Data Modeling
- Python / Scala
- Data Pipeline Automation
- Cloud Data Platforms (AWS / Azure / GCP)
- Big Data Tools (Spark, Hadoop, Kafka)
- API Integration
- Linux / Shell Scripting
- CI/CD, Git, Docker
- Monitoring & Troubleshooting
- Snowflake / Redshift / BigQuery (preferred)
- Data Governance & Security
Top Skills Details:
- Data pipelines, SQL, Python, ETL, Data warehouse, AWS/Azure/GCP, Spark, Kafka, Database administration, Performance optimization.
Additional Skills & Qualifications:
- Strong analytical thinking and problem-solving skills.
- Willingness to learn new technologies rapidly.
- Ability to handle multiple tasks in a fast-paced environment.
- Excellent documentation and collaboration skills.
Experience Level: Intermediate.