Q: It seems like the world is focused on generative AI, but there are a lot of use cases around data and AI in general. Is that fair?
I 100% agree. It’s not about the technology; all innovation starts with a problem to be solved. When you look at opportunities that way, it’s important to focus on the “why.” Why would we do this? Why are we trying to solve a particular problem? What's in it for us? Where is the value? Answer those questions, and then we can start getting to the “how.”
For example, think about nonprofits and how they operate. A common challenge is predicting member or donor engagement, depending on the organization’s charter and structure. By better understanding the signals members send through their behavior and demographics, can the organization tell whether a member is likely to renew? And if it can infer or predict that a member won’t renew, can the nonprofit team design an intervention strategy?
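To make that concrete, renewal prediction is usually framed as binary classification over member signals. Here’s a minimal sketch, assuming a hypothetical export of member history with made-up column names like tenure_years and events_attended:

```python
# Minimal sketch of a membership-renewal model as binary classification.
# The data file and column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

members = pd.read_csv("members.csv")  # hypothetical member-history export

features = ["tenure_years", "events_attended", "emails_opened", "last_gift_amount"]
X = members[features]
y = members["renewed"]  # 1 = renewed last cycle, 0 = lapsed

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")

# Members with a low predicted renewal probability are candidates
# for an intervention (a call, a tailored offer, and so on).
at_risk = X_test[model.predict_proba(X_test)[:, 1] < 0.5]
print(f"{len(at_risk)} members flagged for outreach")
```

The intervention piece then becomes a ranking problem: reach out first to the members with the lowest predicted renewal probability.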
Q: What big risks should people think about around AI?
The first one people get concerned with is simply access to data. If I am sharing data with an entity, whether that’s a system or the organization that controls it, I don’t necessarily want it to have full access to that data.
The healthcare example we talked about before is illustrative: many of these systems work only because they have access to massive datasets, and that understandably worries people. So the first risk is governance. How can you make sure you’re governing data access appropriately, to avoid misuse or misappropriation?
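One common pattern for governing that access is to enforce a field-level policy at read time, so each role sees only the fields it needs. Here’s a minimal sketch; the roles and field names are illustrative, not any real system’s schema:

```python
# Minimal sketch of field-level access control for patient records.
# Roles and field names are made up for illustration.
RECORD_POLICY = {
    "billing":    {"patient_id", "insurance_id", "balance"},
    "researcher": {"age", "diagnosis_code"},   # de-identified view
    "clinician":  {"patient_id", "age", "diagnosis_code", "notes"},
}

def read_record(record: dict, role: str) -> dict:
    """Return only the fields this role is authorized to see."""
    allowed = RECORD_POLICY.get(role, set())  # unknown roles see nothing
    return {k: v for k, v in record.items() if k in allowed}

record = {"patient_id": "p-17", "insurance_id": "ins-9", "balance": 120.0,
          "age": 54, "diagnosis_code": "E11.9", "notes": "follow up in 3 months"}
print(read_record(record, "researcher"))  # {'age': 54, 'diagnosis_code': 'E11.9'}
```

The design choice worth noting is the default: a role that isn’t in the policy gets an empty view, rather than full access.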
Another big risk is the data we use to train these models. The underlying data may carry bias, and with AI we can end up institutionalizing that bias, because the model’s decisions reflect how the data was gathered, not necessarily what’s appropriate or representative.
So it’s important to assess whether a dataset is appropriate for a given model, or whether it encodes a specific bias you wouldn’t want to become pervasive. That’s a risk.
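That assessment can start before any training happens, by checking how each group is represented in the data and how outcomes are distributed across groups. A minimal sketch, assuming a tabular dataset with hypothetical "group" and "label" columns:

```python
# Minimal sketch of a pre-training bias check on a tabular dataset.
# The file and column names ("group", "label") are hypothetical.
import pandas as pd

df = pd.read_csv("training_data.csv")

# 1. Representation: does each group's share of rows look reasonable?
print(df["group"].value_counts(normalize=True))

# 2. Outcome skew: positive-label rate per group.
rates = df.groupby("group")["label"].mean()
print(rates)

# 3. Rough screen: flag groups whose positive rate is under 80% of the
#    best-off group's rate (the "four-fifths" rule of thumb).
flagged = rates[rates < 0.8 * rates.max()]
if not flagged.empty:
    print("Review before training; skewed groups:\n", flagged)
```

A check like this won’t prove a dataset is fair, but it surfaces the obvious skews before they get baked into a model.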
A third risk is autonomous control: taking humans out of the loop so the machines do more than you originally planned for. While that’s the most publicized worry, I think it’s a lower risk because of the controls we have as developers and consultants when implementing these systems. We can design around it. We just need to ask the right questions, assess what data the system is exposed to, and then decide what actions it’s allowed to take.
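In practice, “deciding what actions it’s allowed to take” often means an explicit allowlist, with a human approval step for anything consequential and a default of rejecting anything unplanned. A minimal sketch of that pattern, with made-up action names:

```python
# Minimal sketch of a human-in-the-loop gate around an AI system's actions.
# Action names and the approval mechanism are illustrative only.
AUTO_APPROVED = {"send_reminder_email", "update_crm_note"}   # low-impact
NEEDS_HUMAN   = {"issue_refund", "close_account"}            # high-impact

def execute(action: str, payload: dict) -> str:
    if action in AUTO_APPROVED:
        return f"executed {action} automatically"
    if action in NEEDS_HUMAN:
        approved = input(f"Approve {action} with {payload}? [y/N] ").strip().lower() == "y"
        return f"executed {action}" if approved else f"blocked {action}"
    # Anything the system wasn't planned for is rejected by default.
    return f"rejected unknown action: {action}"

print(execute("send_reminder_email", {"member": "m-42"}))
print(execute("issue_refund", {"member": "m-42", "amount": 50}))
```

The default-deny branch is the part that addresses “more than you originally planned for”: new capabilities stay blocked until someone deliberately adds them to a list.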
Q: One more question. How do you see AI playing into the automation space?
You can look at companies like Appian and others that have applied AI. You’re not necessarily creating bespoke models; you’re applying the technology in a low-code environment where citizen developers can take advantage of AI. That gives people a head start, even a leap ahead, rather than having to build all of this from scratch. They can say, “Hey, there’s a specific process we want to automate, and we’re going to use some AI to take what was difficult before and help us make some of those decisions.” That approach is much easier to adopt than growing systems from the ground up.
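To illustrate the difference from building bespoke models: with a pretrained model, the AI piece of an automated process can be a few lines of glue. This sketch uses the open-source Hugging Face transformers library as a stand-in; the routing labels are made up, and this isn’t any particular low-code platform’s API:

```python
# Minimal sketch: reusing a pretrained model to make a routing decision
# inside an automated process, instead of training a bespoke model.
# The ticket text and labels are hypothetical.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

ticket = "My invoice shows a charge I don't recognize from last month."
labels = ["billing", "technical support", "account access"]

result = classifier(ticket, candidate_labels=labels)
queue = result["labels"][0]  # highest-scoring label
print(f"Route to: {queue} (score {result['scores'][0]:.2f})")
```

The decision that used to require a custom model, labeled data, and a training pipeline becomes one call to an off-the-shelf classifier, which is exactly the head start low-code platforms package up for citizen developers.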