It's been a while since we published our first post, and as you can imagine, a lot has happened since February. Many of you have asked us about the progress of our product and company, so we wanted to share an update on our current status.

We've moved to a new office - 📍PPNT in Gdynia, Poland.

While we enjoy remote work, building a product from scratch is much easier when we can work together in one place. Communication, decision-making, and team spirit all come easier when everyone is in the same room.
And if you happen to be around, don't miss the chance to drop by our place!

We had a great workshop with Ariel, Avi and the Inovo Team.

Ariel and Avi spent a few hours with us and a few other promising founders talking about B2B sales. We must admit that we learned a lot from their invaluable insights on acquiring first clients, and we received many practical tips on running a company in its early stages. They also gave us plenty of feedback (yes, they asked some tough questions), so we got some homework to do. On our train ride back home, we had no Internet access, which turned out to be a perfect opportunity to rethink our approach to sales and product development. Let's keep our fingers crossed that we're now on the right track.
Thanks to the Inovo Team for the invitation and for hosting the event - we appreciate it! 🙌

We've joined Google for Startups and Microsoft for Founders programs.

While it may seem like a small thing, every penny counts when you're bootstrapping. This is especially true now, as we're in the phase of running performance tests on GPUs, and free credits taste better than ever.

Multiple predictions on one GPU at the same time? We've made it (easy)! 🎉

It required some effort (kudos Łukasz!), but now we can announce that our solution allows for dynamically splitting a GPU into smaller slices that multiple models can use simultaneously.
Why is this so important?
Let's focus on two main reasons:
→ Cost savings - you'll pay less for ML predictions because each model receives precisely the resources it needs to run inference. Your infrastructure will be optimized for maximum usage. Zero waste. With our solution you'll be among the pioneers who squeeze every last bit out of their cloud resources and increase margins on ML operations. Additionally, when there are no inference requests, we shut everything down until it's needed again. True scale to zero for MLOps.
→ Client satisfaction - you no longer have to worry about your clients experiencing long wait times for responses, even during high traffic loads. Our solution ensures a stable response time regardless of the volume of requests.

Hope you enjoyed reading this update.
And until next time! 🙂