Egen is a data engineering and cloud modernization firm helping industry-leading companies achieve digital breakthroughs and deliver for the future, today. We are catalysts for change who move at warp speed. Our team of cloud and data engineering experts is trusted by top clients in pursuit of the extraordinary. Named to the Inc. 5000 list of fastest-growing companies seven times, and recently recognized on the Crain's Chicago Business Fast 50 list, Egen has also been recognized as a Great Place to Work four times.
You will join a team of insatiably curious data engineers, software architects, and product experts who never settle for "good enough". Our Data Engineering teams build scalable data platforms and fault-tolerant pipelines for modern analytics and AI services using Python, SQL, Linux, Bash, and AWS, GCP, or Azure data storage and warehousing services.
As a Senior Data Engineer, you will be the subject matter expert in building event-driven data pipelines, supporting and solving data integration challenges, and processing data in cloud-native data warehouses and data lakes.
Responsibilities:
- Lead passionate data engineering teams in designing and building distributed, event-driven data pipelines with cloud-native data stores such as Snowflake, Redshift, BigQuery, or ADW.
- Design, document, and lead the development of end-to-end data product pipelines.
- Consult with business, product, and data science teams to understand end-user requirements and analytics needs, and implement the most appropriate data platform technology and scalable data engineering practices.
- Prepare data mapping, data flow, production support, and pipeline documentation for all projects.
- Lead data engineering and/or operations teams in ensuring the completeness of source-system data by performing profiling analyses and triaging issues reported in production systems.
- Facilitate fast and efficient data migrations through a deep understanding of design, mapping, implementation, management, and support of distributed data pipelines.
What we're looking for:
- Minimum of a Bachelor's degree, or its equivalent, in Computer Science, Computer Information Systems, Information Technology and Management, Electrical Engineering, or a related field.
- You have productionized real-time data pipelines built on event-driven architecture (EDA), leveraging Kafka or a similar service.
- You know what it takes to build and run resilient data pipelines in production and have experience implementing ETL/ELT to load a multi-terabyte enterprise data warehouse.
- You have a strong understanding of, and hands-on exposure to, data mesh principles for building modern data-driven products and platforms.
- You have a strong background in distributed data warehousing with Snowflake, Redshift, BigQuery, and/or Azure Data Warehouse.
- You have expert programming/scripting knowledge in building and managing ETL pipelines using SQL, Python, and Bash.
- You have implemented analytics applications using multiple database technologies, such as relational, multidimensional (OLAP), key-value, document, or graph.