Introducing Whoosh


Background

Artificial intelligence technologies are advancing fast, yet companies struggle to bring AI ideas into reality. To stay competitive, hiring data science and machine learning teams is no longer enough. Attracting ML talent is one obstacle companies must overcome; turning prototypes into production systems is another.

Most ML models stall before deployment

Data scientists, researchers, developers, and engineers struggle to deploy machine learning prototypes into production. According to Alegion, 78% of ML projects stall before deployment. While software deployment has come a long way, machine learning workflows and deployment best practices are still in their infancy.

ML model deployment is slow

Machine learning practitioners are overly confident in their estimates of how long it takes to launch AI applications. According to Algorithmia, 40% of companies said it takes more than a month to deploy an ML model into production. The more time teams spend configuring infrastructure code to keep models scaling, the less time they spend solving industry problems.

High maintenance costs of ML systems

Maintaining machine learning systems is a burden, and ML practitioners underestimate how much time server maintenance consumes. According to Stripe, developers spend 42% of their workweek (over 17 hours) dealing with maintenance issues, and about 25% of their time fixing bad code. That adds up to $300 billion in lost productivity each year. Companies must realize that it's not the size of the engineering team that counts; it's how its time is used.

High infrastructure costs of serving ML models in production

Running machine learning models in production is expensive. According to GennovaCap, companies pay up to $20,000 per month for Amazon Elastic Compute Cloud (Amazon EC2) servers.

What if there was a way to reduce server costs while simplifying ML deployments? I'm here to share with you a better way.

Introducing Whoosh: Modern Machine Learning Hosting

Whoosh is a modern machine learning hosting platform. Whoosh makes it easy to build and deploy ML models into production. Machine learning practitioners focus on writing code and Whoosh takes care of the rest.

Developers use Whoosh to rapidly experiment with prototypes. Teams use Whoosh to transition from prototypes to production-ready applications.

Whoosh makes it easy for data science and engineering teams to get started with AI applications and ML systems. Whoosh allows teams and developers to deploy ML models into production using one-click deployments.

Whoosh is the Netlify or Fleek of AI application deployment. AI practitioners focus on the code, not managing servers.

Whoosh is accelerating the adoption of machine learning models. Whoosh is the catalyst for an artificial intelligence economy.

Features

Deploying machine learning models is easy, thanks to Whoosh. Whoosh builds on the git-based workflows developers love to provide builds, deployment and hosting, and serverless functions at scale.

Build

You first define what a model prediction looks like. Whoosh allows you to define prediction APIs for any model, regardless of framework, using an intuitive interface.
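
As an illustration of what this could look like (the class and method names below are assumptions for the sake of the sketch, not a finalized interface), a prediction API might be a plain Python class with a load hook and a predict hook:

```python
# Hypothetical sketch of a Whoosh prediction API.
# The Predictor class and its load/predict hooks are illustrative
# assumptions, not a published interface.
import pickle

class Predictor:
    def load(self):
        # Runs once per worker: load the trained model into memory.
        with open("model.pkl", "rb") as f:
            self.model = pickle.load(f)

    def predict(self, payload: dict) -> dict:
        # Runs on every request: turn a JSON payload into a prediction.
        features = [payload["features"]]
        return {"prediction": self.model.predict(features).tolist()}
```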

Whoosh manages all build dependencies to ensure your ML model compiles and runs as expected. When you trigger a build on Whoosh, our bot starts an AWS Lambda function to build your model. Before running your build command, our bot looks for instructions about the required languages and software needed to run your command. These are called dependencies, and how you declare them depends on the languages and tools used in your build. At Whoosh, we are launching with support for Node.js, JavaScript, and Python.
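
For a Python build, for example, dependencies would typically be declared in a requirements.txt file (the standard Python convention; whether Whoosh reads this exact file is an assumption of this sketch):

```text
# requirements.txt: standard Python dependency manifest (versions illustrative)
scikit-learn==1.4.2
numpy==1.26.4
```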

Whoosh acts as a continuous integration (CI) tool thanks to its GitHub integration. Whenever developers make changes to the code, Whoosh analyzes those changes and redeploys the affected machine learning models.

Deploy & Host

With Whoosh, you configure deployments declaratively with YAML (YAML Ain't Markup Language). Stop worrying about Kubernetes, Docker, and model servers. Whoosh uses serverless architectures to manage servers for you so you can focus on ML code.
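
Here is a sketch of what such a declarative config could look like (every key below is a hypothetical assumption, not a finalized schema):

```yaml
# whoosh.yaml: hypothetical deployment config; all keys are illustrative.
name: churn-predictor
runtime: python3.11                 # language runtime for the build
build:
  command: pip install -r requirements.txt
deploy:
  entrypoint: predictor.Predictor   # the prediction class from the Build step
  memory: 1024                      # MB per serverless function
  autoscale:
    min_instances: 0
    max_instances: 50
```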

Benefits

ML deployment simplified

Whoosh makes deployment easy thanks to one-click deploys. Whoosh simplifies deployment so developers can focus on what they do best: writing ML code. No glue code is required.

No server management for ML systems

With Whoosh, developers stop worrying about server maintenance, upgrades, or patching. All ML infrastructure is abstracted behind Whoosh and managed for you.

Production-ready ML models

Deploying machine learning models into production shouldn't be hard. With Whoosh, developers deploy ML models using serverless architectures and serve the first prediction within minutes.
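
Once deployed, the model is reachable over HTTPS like any other web API. The endpoint URL and response shape below are made-up placeholders:

```python
# Calling a deployed model; the URL and response shape are hypothetical.
import requests

resp = requests.post(
    "https://churn-predictor.example.whoosh.app/predict",  # placeholder endpoint
    json={"features": [42.0, 3, 0.87]},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"prediction": [1]}
```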

Make ML deployment affordable

Behind the scenes, Whoosh takes your machine learning models and serves them on auto-scaling infrastructure. Once your model is served by Whoosh, you can count on it staying online, no matter how many connections it receives or how viral your application goes.

Works for all popular ML frameworks

Whoosh works with popular machine learning frameworks, libraries, and tools to deploy any model into production. Whether you are developing a model using TensorFlow, Keras, scikit-learn, PyTorch, or Apache Spark, Whoosh serves each model as a web API, so the rest of the development team can quickly integrate intelligent predictions into their applications.

Gives developers back 42% of their time

Whoosh frees up developers' time. When developers are not required to manage servers, they are free to pursue other projects or complete more important tasks. Freeing developers to do higher-value work rather than managing servers is essential for companies wanting to extract maximum value from projects, generate more revenue, and stay competitive.

Lower cost than deploying ML models to traditional servers

Whoosh manages servers for developers. While traditional servers incur massive inference costs, a serverless architecture allows Whoosh to deploy low-cost AI systems with auto-scaling built in. Thanks to serverless machine learning, Whoosh stands on the shoulders of a giant: Amazon Web Services (AWS).

Deploy AI systems at scale

Thanks to Whoosh, you can host your machine learning models in the cloud. Whoosh provides an intuitive interface for deploying ML models as APIs. Best of all, Whoosh automatically scales to handle production workloads.

Attending a conference and want to showcase your work? Whoosh has you covered. Launching a new feature powered by deep learning technologies? Whoosh has your back. Decided to share your work on Reddit, Hacker News, or Product Hunt? Whoosh will scale to make sure you are riding the publicity wave.

Frequently Asked Questions

How does Whoosh work?

Thanks to Whoosh, you can deploy your model in minutes. Building and hosting ML models is done in three steps:

Step 1: Connect your repository

Whoosh detects changes pushed to git and triggers automated deploys.

Step 2: Add your build settings

Whoosh provides you with a powerful, customizable build environment.

Step 3: Deploy your model

Whoosh deploys your model on a serverless machine learning architecture.

Why use Whoosh for hosting ML models?

Git Integration

Whoosh integrates with GitHub to empower developers to adopt git-based workflows.

Auto Deploy

Whoosh watches your code and triggers build scripts to ensure you are always serving the latest models in production.

Auto SSL

Whoosh configures your model to ensure that data is always secured by HTTPS (Hypertext Transfer Protocol Secure) and SSL (Secure Sockets Layer).

Blazing Fast

Whoosh is blazing fast. How fast? All you hear is a whoosh when deploying ML models into production.

Easy to Use

Model deployment should be as simple as possible, with a user interface that guides you toward serving models in production. Luckily, Whoosh provides just that.

Collaborative

Collaborating on ML models is essential to ensure that developers and ML practitioners are catching code errors and implementing industry best practices. Collaboration begins with code review and code sharing. Whoosh integrates with GitHub to ensure that developers are always in sync when it comes to code. Collaborating on ML systems is as easy as inviting team members to join the project on Whoosh.

Customizable

Customizing ML systems for optimal performance is easier than ever. Once the developer specifies build preferences, Whoosh takes care of deployment.

Next Step

Interested in building the future of machine learning hosting? Contact me about Whoosh.

Created by Slava Kurilyak (slavakurilyak.eth)