Mid-17th century (in the sense "cooperating to produce a result"): from modern Latin coefficient-, from com- "together" + efficient- "accomplishing".
We are driven by the challenge of solving real-world problems by combining research-grade statistical techniques with a lean startup mentality and a technical skillset.
“Data scientists spend around 80% of their time on preparing and managing data” - Forbes
Good data engineering enables great data science. We help our clients get their data from source to store with a lean & clean approach to software engineering:
We helped Kingston Smith to automate parts of their audit process, including data ingestion, accounting quality checks and ETL into a SQL Server database.
We built Franchise Partners’ data architecture from the ground up, automatically retrieving, validating and transforming a large number of “dirty” datasets with potentially market-predictive properties, for use in both business intelligence and machine learning applications.
City Pantry needed routing software for their delivery drivers that could handle London’s unpredictable traffic. By leveraging open-source components (Google’s OR-Tools and the Django REST Framework), we quickly built an effective, scalable drop-in replacement for their existing driver scheduling and routing systems.
“AI is ready right now to automate increasingly complex processes, identify trends to create business value, and provide forward-looking intelligence” - PwC
Process automation, trend detection and forecasting the future. Machine learning can do all this and much more! We’ve helped our clients to:
Predict a racehorse’s future value (Racing Post).
Backtest stock selection models against historical investment opportunities (Franchise Partners).
Forecast future conversion rates for newly acquired customers (Uncover).
Estimate heart rate variability from a wrist-worn device (Felix).
Predict and model the 2017 UK General Election (SixFifty).
Track football players at 60fps with deep neural networks (Hawk-Eye).
Training & Conferences
Coefficient has designed and delivered data training courses ranging from a zero-experience introduction to Python through to advanced machine learning, for clients such as BNP Paribas, EY, Hawk-Eye and the BBC. We also run public training courses.
John has run workshops and hackathons for IBM, PyData Bristol, the Royal Statistical Society, the PyData London conference, and Newspeak House. Previously he taught four semesters and numerous workshops as Data Science Instructor for General Assembly’s 11-week Data Science Course.
SELECTED TALKS & INTERVIEWS
PyData Bristol - Intro to Data Science.
Big Data Bristol - Big Data 101: Volume, velocity and variety for the uninitiated.
Extract Conference 2015 - Building Data Science Teams.
The Next Web - How data scientists are changing the face of business intelligence.
Big Data & Analytics Innovation Summit 2015 - Building production recommendation systems.
EARL Conference 2014 - Automating data workflows.
Contact us to discuss your needs
FOUNDER & PRINCIPAL DATA SCIENTIST
John’s experience in data science and software engineering spans multiple industries and applications, and his passion for the power of data extends far beyond his work for Coefficient’s clients. In April 2017 he created SixFifty to predict the UK General Election using open data and advanced modelling techniques. He is a Fellow of Newspeak House and co-organiser of PyData Bristol.
Previous experience includes Lead Data Scientist at YPlan, business analytics at Apple Inc., genomics research at Imperial College London, building an ed-tech startup at Knodium, developing strategy & technological infrastructure for international non-profit startup STIR Education, and losing sleep to many hackathons along the way.