Remote Data Engineer in MN
About the Role
Insight Global is seeking Sr. Data Engineers to join a Personal Campaign Operations team. This is a visible, well-funded team that brings in millions of dollars for the company. The team supports the entire enterprise (care delivery and health plan marketing) and works in a collaborative, cross-functional environment with both developers and campaign owners. This person must be flexible, as they will work with different people on multiple campaigns at once and pivot as priorities change. Overall, this person will be responsible for the following:
- Design, develop, and implement end-to-end data solutions using Azure Databricks.
- Convert current SQL to Python code in Databricks.
- Modify and maintain existing data pipelines in a production setting.
- Write, test, and optimize PySpark and SQL scripts to transform and load high volumes of structured data.
- Ensure data quality and integrity by implementing data validation and cleansing processes.
- Demonstrate strong verbal communication and critical thinking skills; work well within a team and be willing to speak up.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances.
Qualifications
- 4+ years of experience in data engineering
- Proficiency in Azure Databricks, PySpark, and SQL
- Experience with data quality and pipeline production support
Interested in this role?
Apply now and take the next step in your career.
