Job details
Company name
High Tech Engineering Center Kft.
Place of work
Nationwide
Working hours, type of employment
- Employee status
- Standard work schedule
Required technologies
- Analytics, Python, SQL, AWS, Azure, cloud, Spark, Docker, databases, Apache, BI, Unix, virtualization
Requirements
- Intermediate English
- 5-10 years of experience
- Secondary school education
Job description
Responsibilities
- Take ownership of data engineering features, architecture, and code quality
- Design, implement, and maintain Databricks-based data pipelines and workflows
- Build and optimize ETL/ELT processes using Apache Spark on Databricks
- Design and manage data lakes and Lakehouse architectures (Delta Lake)
- Integrate diverse data sources and ensure reliable data ingestion
- Automate orchestration, scheduling, and monitoring of Databricks jobs
- Design and implement fault-tolerant and scalable data processing workflows
- Ensure high data quality, consistency, and accuracy across the platform
- Make informed decisions about storage, compute, and performance optimization
- Collaborate with analytics, BI, and business stakeholders to support data-driven products
Requirements
- 7+ years of relevant experience as a Data Engineer
- Strong hands-on experience with Databricks and Apache Spark
- Proficiency in Python or Scala (both strongly preferred)
- Very good knowledge of SQL, relational databases, and data warehousing concepts
- Solid experience with ETL/ELT principles and data pipeline design
- Hands-on experience with cloud platforms (Azure, AWS, or GCP), preferably Databricks workloads
- Experience working with distributed systems and large-scale data processing
- Familiarity with Unix-like operating systems
- Experience with version control systems
- Strong communication skills and English language proficiency
Nice-to-have
- Databricks certifications (Professional level)
- Experience with Delta Lake, performance tuning, and cost optimization
- Experience with streaming technologies (Kafka or similar)
- Knowledge of workflow orchestration tools (Databricks Workflows, Airflow, etc.)
- Experience with cloud-native and serverless data architectures
- Familiarity with containerization and virtualization (Docker, Kubernetes)
- Experience building data assets that directly support analytics and business decision-making
How to apply
You can submit your application on the company's website by clicking the "Apply on company page" button.