July 2024

The fashion industry is responsible for 5–10% of global greenhouse gas (GHG) emissions. More and more companies are using carbon management platforms to measure their emissions and create CO2 reduction plans. However, most of these solutions are generalist: they are not tailored to a fashion brand, which needs detailed insights into the impact of materials and manufacturing processes, and into the full life-cycle footprint of its products.

This is why we developed Carbonfact, the only Carbon Management Platform dedicated to the textile and fashion industry. Our platform automates life-cycle assessment at the product level, enabling brands to get a high-resolution view of their Scope 3 emissions and to model the effect of product-level changes on the company’s broader environmental trajectory.

We raised $2 million from Y Combinator, Alven, and angel investors in 2021. Now, hundreds of brands and fashion groups use Carbonfact (e.g. New Balance, Carhartt, Allbirds, Adore Me, Armedangels, Fusalp, Happy Socks, etc.).

Data at Carbonfact

The Data team covers two jobs at Carbonfact: customer data parsing and analytics.

Customer data parsing

Our customers share data with us (bill of materials, product catalog, purchase orders, etc.). Our goal is to clean their data and convert it into our internal data model. The normalized data is passed on to our LCA engine, which measures the environmental footprint of each product.

This is a very challenging task. Each customer is unique, and therefore we have to build a dedicated connector for each one. The shared data usually has gaps, typos, anomalies, etc. We handle these edge cases to provide a delightful customer experience.

There are many opportunities to scale this task with technology. For instance, we strongly rely on Pydantic (v2!). We also have a simple NLP toolbox to normalize our customers’ data. We put a lot of thought into designing an internal toolbox that can handle many edge cases while being fun to use. We consider ourselves specialists at dealing with messy data, and our customers love us for this.
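To give a flavor of the parsing work, here is a minimal sketch of how a Pydantic v2 model could validate and normalize a bill-of-materials row. The model and field names are illustrative, not Carbonfact’s actual schema.

```python
from pydantic import BaseModel, field_validator

class MaterialShare(BaseModel):
    """Hypothetical normalized row from a customer's bill of materials."""

    material: str
    share: float  # fraction of product weight, between 0 and 1

    @field_validator("material")
    @classmethod
    def normalize_material(cls, v: str) -> str:
        # Strip whitespace and lowercase so "  Organic Cotton " and
        # "organic cotton" map to the same internal name.
        return v.strip().lower()

    @field_validator("share", mode="before")
    @classmethod
    def parse_share(cls, v):
        # Customers send shares as "60%" or "0.6"; coerce both to a fraction.
        if isinstance(v, str) and v.endswith("%"):
            return float(v.rstrip("%")) / 100
        return float(v)

row = MaterialShare(material="  Organic Cotton ", share="60%")
print(row.material, row.share)  # organic cotton 0.6
```

A real connector would layer many more validators and fallbacks on top of this, but the pattern is the same: push every messy edge case into a typed model before the data reaches the LCA engine.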

Building analytics

Once a customer’s products have been collected and measured, they are transferred to our data warehouse. The Data team is responsible for building analytics on top of this data. The grueling work of cleaning the products and measuring them has already been done, so this task is more straightforward. Our analytics logic is much clearer than our parsing logic.

The analytics we produce are consumed internally, but are also customer-facing. This is powerful, because it puts the Data team at the core of Carbonfact. Indeed, data is very much a central theme in how we operate, and not just a support role.

We have a nice and simple reporting stack which allows us to scale this task quite well. It is based on BigQuery + custom dbt + Observable. Our analytics stack is very much SQL-based.

The data you’ll be working with

From a data science perspective, what’s interesting is that the data we work with describes physical objects. Be it shoe outsoles, or t-shirts, or packaging boxes, or cargo ship containers for transport: it’s real. Our intent is to capture the physical reality of how clothes are made and end up in everyone’s home. We even get down to measuring the energy mix of factories along the supply chain.

Working at Carbonfact as a data scientist is a unique opportunity to apply your knowledge to something that matters. The ultimate goal is to reduce the environmental footprint of companies in our portfolio. It is very much a data science problem that is looking for bright minds to solve.

What we’re looking for

  • Excellent communication skills in English. You’ll be facing customers on a regular basis, discussing technical concepts. It’s important for you to feel comfortable regarding this aspect. For instance, maybe you have some consulting experience where you were customer-facing.
  • Experience working with heterogeneous data. Customer data is like a box of chocolates: you never know what you’re going to get!
  • Knowledge of basic NLP techniques (regex, typo handling, normalization, …). Our current NLP toolbox is simple and yet sufficient, though we may at some point introduce some sophistication.
  • Applied Python skills – our internal parsing toolbox is written in Python. We expect you to have the motivation to make it better, and not just use it.
  • SQL skills – almost all our data analytics is done in SQL. Ideally you are aware of the analytics engineering ecosystem, and take some interest in it.
  • Basic SWE skills – clean code, unit tests, version control, etc. Clean code matters a lot here, because it directly affects our customers. Being able to debug it one year later is a must.

☝️ We recruit people, not roles. If this sounds like the kind of person you want to grow into, then please feel welcome to apply. For instance, Max wasn’t very customer-oriented at first, and grew into it with time.

Work environment

  • You can read more about our 5 principles here.
  • You will work closely with Max (Head of Data), Martin (CPO), and Angie (Head of Customer Success).
  • You can work remotely, as long as you’re based in Europe.
  • We pay for coworking spaces up to 300€/month.
  • We cover the usual modern amenities (MacBook, headset, ChatGPT subscription, GitHub Copilot, etc.).
  • We'll cover 100% of your health insurance with Alan at the best coverage level.
  • We have an office in the 10th arrondissement of Paris, where you’re welcome to join!
  • We organize work retreats 3 times a year.
  • We determine the compensation package (salary + equity) based on a fully transparent internal grid. At the time of hiring, we’ll determine your level based on the position, your track record, and your experience. You will then be promoted to higher levels based on your performance and your impact on the company. Each level is associated with a predetermined compensation.
  • For this position, we are looking for candidates with 3+ years of experience. You can expect a salary between €60k and €80k depending on your level. You can also expect significant equity with employee-friendly exercise rights.
  • We very much believe in open-source software – e.g. lea.

Application and interview process

Please send an email application. You may include:

  1. Your track record
  2. Your portfolio/GitHub
  3. A few lines about yourself 

If there is a fit, we will share some availability for a 30-minute exploratory call with a hiring manager.

If the exploratory call goes well, the next steps are:

  1. 1 hour live data modeling test
  2. 1 hour live text processing test
  3. 1 hour live analytics test
  4. Principles fit with one founder
  5. Reference calls
  6. Take-home case study with a live debrief
