Job Description:
Data is the fuel that powers intelligent automation. We’re looking for a Data Engineer to build the pipelines, warehouses, and infrastructure that enable our clients to turn raw data into actionable insights and AI-powered workflows.
As a Data Engineer at AutomateNexus, you’ll design and implement data solutions for SMB clients—connecting disparate data sources, building reliable ETL pipelines, and creating the data foundations that power dashboards, analytics, and machine learning. You’ll work across a variety of tech stacks and industries, helping businesses unlock the value hidden in their data.
This role is perfect for someone who loves building robust data systems and wants to see the direct impact of their work. You're comfortable working with clients to understand their data landscape and translating their requirements into scalable, maintainable solutions.
Responsibilities:
Data Pipeline Development
- Design and build ETL/ELT pipelines to extract, transform, and load data from diverse sources
- Integrate data from SaaS applications, databases, APIs, and file systems
- Implement data quality checks, validation, and error handling
- Optimize pipelines for performance, reliability, and cost
Data Infrastructure
- Design and maintain data warehouses and data lakes for client analytics
- Configure and manage cloud data platforms (Snowflake, BigQuery, Redshift, Databricks)
- Implement data modeling best practices for analytics and reporting use cases
- Set up monitoring, alerting, and documentation for data systems
Analytics & Integration
- Connect data infrastructure to BI tools (Power BI, Tableau, Looker) for client reporting
- Build data foundations that support AI/ML initiatives and automation systems
- Integrate data pipelines with automation platforms (Make, n8n, Zapier)
- Create data APIs and services for real-time data access
Client Delivery
- Work with clients to understand data sources, requirements, and use cases
- Translate business needs into technical data solutions
- Manage data projects from discovery through deployment
- Provide documentation, training, and ongoing support
Preferred Qualifications:
- 3+ years of experience in data engineering or a related role
- Strong SQL skills and experience with relational databases
- Proficiency with Python and data processing libraries (pandas, PySpark)
- Experience with cloud data platforms (Snowflake, BigQuery, Redshift, or Databricks)
- Familiarity with ETL/ELT tools (Fivetran, Airbyte, dbt, Apache Airflow)
- Understanding of data modeling, warehousing concepts, and best practices
- Experience with cloud infrastructure (AWS, GCP, Azure)
- Strong problem-solving skills and attention to detail
Bonus Points:
- Experience integrating data with automation platforms
- Background in analytics engineering or dbt
- Familiarity with real-time streaming (Kafka, Kinesis)
- Experience working with SMB clients or in consulting
- Knowledge of data governance and privacy requirements
