Mid-Level Developer (Databricks)
POSITION: Mid-Level Developer (Databricks)
LOCATION: Seffner, FL
If local to Tampa: onsite 3 days a week
If not local (candidates should be FL-based at least): onsite 1x/week
DURATION: 6+ months, contract-to-hire (CTH)
Must be able to convert without sponsorship
REQUIRED SKILLS:
Must have a Databricks Certification (Advanced Data Engineer Certification or an equivalent Fabric certification)
The ideal candidate will have experience in data engineering, an understanding of machine learning workflows, and knowledge of modern data architectures.
NOTES:
If the candidate does not have the Databricks cert but is otherwise qualified, they must agree to obtain the certification within two months of starting the assignment.
Responsibilities:
Design, develop, and maintain scalable data pipelines and workflows using Databricks.
Integrate and process structured and unstructured data using Delta Live Tables and streaming data processing.
Support and optimize machine learning workflows by preparing and managing training and inference datasets.
Collaborate with data scientists and AI teams to support AI/BI Genie integrations.
Implement Snowflake and Star Schema architectures to optimize data storage and query performance.
Work on BI reporting pipelines to ensure compatibility with advanced analytics tools.
Develop and maintain documentation of data processes and architectures.
Optimize Spark jobs and workflows for performance and cost-efficiency.
Troubleshoot and resolve data quality and pipeline issues.
Stay updated on advancements in data engineering, machine learning, and AI technologies.
Requirements:
Databricks Certification (e.g., Databricks Certified Data Engineer Professional).
3-7 years of experience in data engineering roles.
Proficiency in Python, SQL, and Apache Spark.
Hands-on experience with Delta Live Tables and big data processing frameworks.
Understanding of machine learning models, pipelines, and AI tools.