ESSENTIAL SKILLS:
Experience designing, building, and optimizing scalable data pipelines and data models using big data frameworks (e.g., Apache Spark, Flink, or equivalents).
Proficiency in Python or similar languages for data processing and automation.
Exposure to cloud data platforms and object storage solutions (e.g., Azure, AWS, Google Cloud) for enterprise data engineering.
Understanding of data governance, data quality, lineage, and compliance principles.
Experience or familiarity with CI/CD pipelines and orchestration tools for data workflows (e.g., GitHub Actions, Jenkins, or cloud-native tools).
Strong analytical and problem-solving skills with attention to detail.
Ability to collaborate effectively within cross-functional and distributed teams.
ADVANTAGEOUS SKILLS:
Working knowledge of cloud services and serverless architectures across Azure, AWS, or Google Cloud.
Experience with monitoring, logging, and data exploration tools (e.g., Splunk, Azure Data Explorer, ELK Stack).
Exposure to streaming platforms or messaging systems (e.g., Kafka, MQTT, RabbitMQ).
Basic understanding of frontend frameworks (e.g., React, Vue.js).
Experience supporting users and managing support tickets.
Solution-oriented mindset with strong communication and teamwork skills.
Ability to understand business requirements and translate them into technical tasks.
Willingness to engage with international customers and navigate language or cultural differences.
Self-motivated, flexible, and ready to learn and take on diverse tasks.
Willingness to travel internationally (up to 2 weeks at a time).
Agile methodology experience and ITIL process knowledge.
Basic German language skills (not required).