
RALEIGH — Following Diveplane’s recent announcement of its rebrand to Howso, I spoke with Dr. Chris Hazard, co-founder and CTO, about the company’s open-source AI focus and the state of AI transparency and governance.

Dr. Chris Hazard, Howso co-founder and CTO

This interview has been edited for length and clarity.

TechWire: You’ve recently changed names from Diveplane to Howso. What was the intent behind this pivot and how has it changed the company’s priorities?

Dr. Chris Hazard: When we initially started the company, we were doing explainable, understandable AI the right way. Unfortunately, it’s kind of like health food; people say they want it, but they don’t want to pay for it. So we pivoted into synthetic data, where we got very, very good at making data that has all the insights but none of the original data, so you can share your data privately and safely.

But our whole mission has always been ethical, understandable AI everywhere, and we feel like the time is right. We’ve known since the inception of the company that long-term, we would always be open-sourcing. That’s really the only way to be able to see every step of the way through the process. So the rebrand is really just the realization of the overall mission.

TW: What’s behind the name, “Howso”?

CH: We feel like it really captures the mission and the idea better. So when we say, “Okay, here’s a prediction, here’s an estimation, here’s an answer,” someone can ask, “Well, how so?” And we have an answer for that.

TW: Your last fundraising was your Series A last year that netted $25 million. How are you doing for funding? When do you think you’ll need to raise again?

CH: Being a startup, there’s always the next round. You’re always thinking about that. But we’re in really good shape right now. We’ve been slowly hiring throughout the summer. We’re now just shy of 30 people and still growing.

TW: With open source, there’s often an issue around how the company makes money. Can you talk a little bit about the more commercial assets that you have?

CH: One of the things we were thoughtful about when we launched as open source was that we could both contribute to the community and be a viable business. We’re open-sourcing all of the technology that a data scientist might want to use to build an ethical, understandable AI system. You can do that today on your laptop; it takes about five minutes to get it up and running. We want to make that available to academics and people who are doing small projects.

But the moment you’re like, “You know, I would like to connect this with other large enterprise systems and do integrations. I want to do things at scale,” that’s the commercial side of the business. We can solve those sorts of things.

TW: What’s one of the biggest concerns you have with respect to how AI is being discussed and explained in the media?

CH: I think when you hear about “explanations” or “interpretability” of AI, those terms are being diluted a bit. Historically, an explanation is a reason. It could be right or it could be totally wrong. Interpretability used to mean you can actually see all the steps along the way, but they’re almost becoming interchangeable.

So we’re calling it “understandable” as a way to anchor the concept. A subject matter expert in their domain, who knows almost nothing about AI, should be able to look at the output, understand it, and communicate it to another subject matter expert. It needs to be human understandable. And then, on the flip side, somebody with a little bit of math background could trace it from the data all the way through to the end. We’re pushing in that direction of deep and rich understandability. We’re going to keep talking about that.

TW: Your Howso engine is, as you say, understandable. You can trace responses and account for influence. What’s the technology behind it that allows for that transparency?

CH: Traditional machine learning happens when you get data, build some model that looks at the data, and do something with it. Instead of that traditional machine learning, we use instance-based learning (IBL).

With instance-based learning, instead of building a separate model, your data is the model. So when you have a question, you find the data that is most relevant and most similar, and then you interpolate.
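To make that concrete, here is a minimal sketch of the instance-based idea in Python. It is a plain distance-weighted nearest-neighbor interpolation, not Howso’s actual engine, and the function and variable names are illustrative:

```python
import numpy as np

def ibl_predict(X, y, query, k=5):
    """Instance-based prediction: the stored data itself is the model.

    Finds the k most similar stored instances to the query and
    interpolates their targets, weighting closer instances more heavily.
    """
    distances = np.linalg.norm(X - query, axis=1)   # similarity via distance
    neighbors = np.argsort(distances)[:k]           # k most similar instances
    weights = 1.0 / (distances[neighbors] + 1e-9)   # closer -> more influence
    weights /= weights.sum()
    prediction = np.dot(weights, y[neighbors])      # interpolate neighbor targets
    return prediction, neighbors, weights           # keep the evidence, not just the answer

# Example: predict a value for a new point from five stored cases
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([1.1, 1.9, 3.2, 4.1, 4.8])
pred, idx, w = ibl_predict(X, y, query=np.array([2.5]), k=3)
print(pred, idx, w)
```

Because the prediction is assembled directly from identifiable training rows, the neighbors and their weights come back alongside the answer.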

Classic algorithms that have done this in the past aren’t very accurate, they don’t scale very well, and so on. But in the last 15 to 20 years, we and others have made some major breakthroughs in the math. Now we can give a probability of similarity and also understand the uncertainty around it. One of the neat things about our techniques is that they’re self-calibrating. When you hear about hallucinations, large language models say, “Well, here’s some answer,” always with the same level of confidence. Whereas with our techniques, the response may be, “Here’s the answer, and it’s almost certainly correct,” or “I have no idea. Here’s an answer, but if you teach me more, if you give me more data like this, I might be able to improve answers.”
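The self-calibration Hazard describes can be illustrated in a deliberately simplified way; this is a stand-in for, not a description of, Howso’s actual uncertainty math. Here the spread and sparsity of the retrieved neighbors drive the reported confidence:

```python
import numpy as np

def ibl_predict_with_uncertainty(X, y, query, k=5):
    """Prediction plus a data-driven uncertainty estimate.

    When the nearest instances agree and sit close to the query, the
    reported uncertainty is small; when they disagree or are far away,
    the model effectively says "I have no idea -- give me more data."
    """
    distances = np.linalg.norm(X - query, axis=1)
    neighbors = np.argsort(distances)[:k]
    weights = 1.0 / (distances[neighbors] + 1e-9)
    weights /= weights.sum()
    prediction = np.dot(weights, y[neighbors])
    # Weighted spread of the neighbors' targets around the prediction:
    # a crude proxy for how much the relevant evidence disagrees.
    variance = np.dot(weights, (y[neighbors] - prediction) ** 2)
    sparsity = distances[neighbors].mean()  # far neighbors -> sparse region
    return prediction, np.sqrt(variance), sparsity
```

A query landing in a dense, consistent region of the data returns a tight estimate; a query in a sparse or contradictory region returns a wide one, which is the behavior being contrasted with an LLM’s uniform confidence.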

TW: There are other “open source” AI models out there like Llama 2 and frameworks like TensorFlow. Can you differentiate Howso from those other solutions?

CH: Yeah, so if you really think about it, what does open source mean? It’s not just about what you can use it for, but also how transparent it is. Can you fix it? Can you debug it? If you’ve got a framework that is open source but built with black-box models, is that really open source?

With our software, the results that we give can be traced from the data all the way to the decision in a small number of steps, and you can see exactly how it was computed. So I would argue that in many regards we’re more open source, because we meet those open-source ideals.
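In the instance-based sketch above, that traceability falls out naturally: the answer is literally a weighted sum over identifiable training rows, so the explanation is an audit trail rather than a post-hoc guess. Again, this is a toy illustration continuing the earlier sketch, not Howso’s actual reporting:

```python
# Continuing the sketch above: every prediction decomposes into
# named contributions from specific training instances.
pred, idx, w = ibl_predict(X, y, query=np.array([2.5]), k=3)
for i, weight in zip(idx, w):
    print(f"training row {i}: target={y[i]:.2f}, influence={weight:.1%}")
# The answer is exactly sum(weight * target) -- nothing hidden in between.
```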

TW: Do you have any thoughts on the work that Congress is undertaking with respect to AI?

CH: There’s a lot of noise in our country and around the world about regulating AI and trying to do it right. I think we’re all trying to figure it out. The current administration has non-regulation regulation that is directionally pretty good, but obviously it has no teeth. And if we develop legislation around these black-box systems that addresses their specific techniques, we sort of lose sight of what could be possible with other techniques like ours.

So I’m more of a proponent of focusing on data governance, privacy, and those sorts of aspects, because really, AI is not much without data. Regulating data, the right-to-be-forgotten rules in Europe, and really looking at the incentives around collecting and using data: I think that is a more useful path, and it’s more general, as opposed to legislating a technique that might be outdated in three years.

TW: What does success look like for Howso? Where do you want to be in five or ten years?

CH: We’re really going full tilt at black-box AI systems. Our company’s mission is to replace those neural networks. They’re great. They’ve had a lot of amazing success. They’re very useful for some things, but it’s virtually impossible to really know the whys and understand how things are happening. And maybe that’s okay for some things, but I’d argue not many.

So I’d say in five years we should be, ideally, in hundreds of institutions. It’d be a standard tool in the data scientist’s toolkit, and we’d be continuing to grow as a company. I see long-term success as either us or somebody else building chips using this sort of technology. Look at all the money being put into chips for other forms of AI. And then the really long-term goal is to replace neural networks: build up all the capabilities that neural networks have, all the successes, but from a different starting point, bringing a different, much richer capability for understanding and debugging.