Job details
Company name
Grafton Recruitment
Workplace location
Hybrid • Budapest
Working hours, type of employment
- Full-time
- Standard work schedule
Required technologies
- Apache Spark, databases, NoSQL, cloud, testing, Java, Python, SQL, security, Azure, Access, machine learning
Requirements
- Advanced-level English
- Native-level Hungarian
- Entry-level / recent graduate
- University degree
Job description
Responsibilities
In this role, your key tasks will include:
- Designing, developing, and maintaining scalable data pipelines and workflows for large-scale data processing
- Replacing legacy data warehouse systems with modern data lake solutions
- Architecting robust data platforms leveraging structured and unstructured data
- Ensuring data quality, reliability, and scalability across multiple business domains (Safety, Security, Crisis Management)
- Optimizing query performance in BigQuery and other SQL/NoSQL environments
- Developing and maintaining conceptual, logical, and physical data models
- Collaborating with cross-functional teams to prioritize data needs and deliver actionable insights via dashboards
- Building and maintaining ETL processes and event streaming solutions (Kafka, Spark Streaming)
- Overseeing seamless operation of data solutions and preparing for cloud migration to GCP
Requirements
You are the perfect candidate if you:
- Have 5-10 years of experience building production-grade data processing systems as a Data Engineer
- Hold a degree in Computer Science, Software Engineering, or equivalent (or comparable work experience)
- Have strong expertise in Google Cloud Platform (GCP) - especially BigQuery, pipeline creation, SQL/NoSQL
- Are proficient in Python (preferred), with additional experience in Java or Scala a plus
- Have experience with databases, SQL, and data modelling (conceptual, logical, physical)
- Understand common algorithms and data structures
- Have knowledge of CI/CD techniques and automated testing
- Are fluent in English - both written and spoken
- Possess coaching experience or willingness to mentor junior colleagues
It's a plus if you have:
- Hands-on experience with Azure or the Hadoop ecosystem
- Experience with Apache Spark (including Spark Streaming)
- Knowledge of columnar storage solutions (e.g., Apache HBase)
- Affinity with Machine Learning and/or Operations Research concepts
What we offer
What you'll get in return:
- Competitive salary package with cafeteria benefits
- Hybrid working conditions (Budapest office presence 1 day/week)
- Access to professional training and development opportunities
- Home office allowance
- Training for soft and hard skills, plus language courses
- Discounts on airplane tickets
- Annual bonus
Company info
Our partner is a well-established international airline with a long-standing reputation for reliability and innovation in air travel. With a strong European presence and global connectivity, they prioritize sustainability, customer experience, and operational excellence. Their IT and digital teams play a key role in shaping the future of aviation through cutting-edge solutions and collaborative development.
They are looking for a Senior Data Engineer (Python, SQL, GCP) to join their growing Budapest-based Data team.