Upshift is a product-based company where diverse, smart, and passionate people embrace the power of technology to improve the way people earn and work. Since 2016, our staffing platform has been disrupting the $150bn recruitment industry by easing the job-search process.

Headquartered in Miami, FL, and backed by the largest job platform in the world, Upshift is currently operating in 20 cities across the US with 100,000+ high-quality workers verified on our app. The company has grown from three founders to 150 brilliant minds – and we are just getting started. Our sights are set on becoming the world’s largest HR Tech Company.

After struggling to make temporary staffing and recruiting efficient, we decided to take fate into our own hands and built the long-needed solution. We help our users find flexible job opportunities by connecting them with businesses in need of staff. With Upshift, people can decide where, when, and how they work. They can combine different types of flexible work and develop multiple sources of income. Simply put, we make work better by helping everyone achieve their own goals.

You will be one of the first dedicated Data Engineers within the company, taking on the responsibility of redefining the data pipeline and data model, driving data analytics to a level of excellence, and building and scaling the data platform architecture from scratch. In this role, you will ensure high data quality is maintained throughout the product life cycle and play an important role in enabling the Engineering/Product Team to gain insight in a timely manner. The ideal candidate combines strong business acumen and experience with data pipelines and databases with a passion for tech.

Requirements

  • Minimum 2 years of data engineering experience (building, operating, and maintaining data pipelines and ETL processes in big data environments, and API-based integrations).
  • Intermediate experience with at least one of the following programming languages: Python, Go, or Java.
  • SQL knowledge and database experience covering relational, dimensional, and NoSQL techniques.
  • Good understanding of cloud platforms and common data (science) tools in AWS or GCP, including SageMaker, S3, EC2, Redshift, and BigQuery.
  • Demonstrable experience with Git and agile development practices.
  • Strong written and verbal English communication skills.
  • Comfortable working in a fast-paced environment - you have a healthy fear of being bored.
  • An ability to evolve and expand your own technical skill set in line with emerging trends.
Responsibilities

  • Collaborate with engineering teams to build, extend, and envision cross-platform ETL and report-generation frameworks that raise engineering standards.
  • Build, optimize, and improve the end-to-end workflow for data users at Upshift, including designing data structures, building and scheduling data transformation pipelines, and improving access to critical data assets.
  • Deploy and monitor data services as well as CI/CD workflows on top of our AWS infrastructure.
  • Stay informed on new, relevant data solutions and evaluate the promising ones, so we don't miss game-changing opportunities to boost our productivity and the enjoyment of our work.
  • Partner effectively with product managers, software engineers, and the entire cross-functional product team working towards top user value.
What we offer

  • Ongoing professional and career development (an opportunity to level up every three months, as defined in your tailor-made Career Progression Framework);
  • Career and technical mentoring to enhance your skills;
  • Innovation space - the opportunity to book a full eight-hour working day and spend it learning, reading, and upgrading your skills;
  • An awesome team to work with - where fun and work always belong in the same sentence.


Since 2016, we have been shaping the future of work.

One Shift at a time! 🚀

Join our team