A conversation with my friend Ryan about business turned into a conversation about the future of interfaces, a topic I’ve been writing and talking about because I’m very bullish on voice as the interface of the future over the next five years.
Over dinner, Ryan brought up a good point: the role of beacons. I lump beacons and sensors together because I ask myself, “How do I detect?” A beacon or a sensor detects some action happening, and we can use that as a trigger to do something else.
Ryan is working on a personal app for the home and asked how I would automate my home. Personally, I think it would be done with a mixture of beacons/sensors and voice. I’m not sure everything can be done with one or the other.
Even if it could be done with only one, which would be the best interface?
For example, if my fridge has no more milk, I wouldn’t want to voice activate and say, “Order some more milk.” I just want it to order milk on its own by knowing I’m out of milk. But then in the morning, if I want to know the weather, I’d say, “Alexa, what’s the weather?”
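The fridge scenario boils down to a simple pattern: a sensor reading crosses a threshold, and that fires an action with no voice command involved. Here’s a minimal sketch of that idea; the names (`on_sensor_reading`, `place_order`) and the threshold value are hypothetical, and a real version would call an actual grocery-delivery API.

```python
MILK_THRESHOLD = 0.2  # hypothetical: reorder when the carton is below 20% full

def place_order(item):
    # Stand-in for a real grocery-delivery API call.
    print(f"Ordering more {item}...")
    return True

def on_sensor_reading(item, level):
    """Called whenever the fridge sensor reports a fill level (0.0 to 1.0)."""
    if level < MILK_THRESHOLD:
        return place_order(item)
    return False

on_sensor_reading("milk", 0.1)  # low reading: fires an order automatically
on_sensor_reading("milk", 0.9)  # full carton: nothing happens
```

The point of the sketch is that the trigger lives in the sensor callback, not in anything I say or tap.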
I think there’s a role where a “connected home” is a mix between a personal assistant with voice and having some sensors to detect things.
Such as: right when I wake up, I’ve set a trigger so that 10 minutes later it says:
“Good Morning, Anthony. Your first meeting’s at 10 a.m. Here’s the weather today. Here’s how much time you have before your meeting. If you want to go to the gym you need to leave in 10 minutes.”
And so on. It’d be like a guided assistant. That would be pretty cool, right?
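That briefing is really just a few data sources stitched into one spoken summary after a wake-up trigger. A rough sketch, with all the data hard-coded: a real version would pull from calendar and weather APIs and hand the text to a voice assistant, and `build_briefing` and its parameters are purely illustrative.

```python
def build_briefing(name, first_meeting, weather, minutes_until_gym_departure):
    """Assemble one spoken morning summary from separate data sources."""
    return (
        f"Good morning, {name}. "
        f"Your first meeting is at {first_meeting}. "
        f"{weather} "
        f"If you want to go to the gym you need to leave in "
        f"{minutes_until_gym_departure} minutes."
    )

# Hard-coded stand-ins for calendar and weather lookups.
print(build_briefing("Anthony", "10 a.m.", "It's 68 and sunny today.", 10))
```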
For sensors, it wouldn’t just have to be my fridge. Say I’m running low on shirts. It knows I have nine dress shirts in my apartment and I only have three left, so it sends me a suggestion to get my shirts dry cleaned.
Think about your cell phone. It’s great, but is it really the best interface?
Why do I need to touch something?
Why do I need to go to an app to check the weather?
Why do I need to go to an app to order food?
These should be instantaneous.
Taking this a step further: when I think something, I want it to happen.
What happens when you’re using an app? You think something, go to your phone, and you take action. Ryan and I questioned why it even has to go there. Even for voice, I think something but have to say it in order for it to happen.
Right when I think something, it should happen. That’s the future. In the meantime, it’s about reducing the amount of time it takes to do something.
Technology should be getting out of the way.
Say I have a morning meeting in Soho. Great. Now I have to think about how long it’s going to take me to get there, how long it’s going to take a car to get to my apartment, and what time I need to leave. I need to spend minutes just figuring out when I need to f*ing leave.
That shouldn’t happen.
What should happen is: my appointment is at 9 a.m. in Soho and this is where I live. The voice activated system then tells me I need to leave in 10 minutes and the car is already ordered.
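Working backwards from the appointment is straightforward arithmetic: subtract travel time (plus a buffer) to get the departure time, then subtract the car’s pickup lead time to know when to order it. A minimal sketch, assuming made-up numbers; `plan_departure` and its parameters are hypothetical, and a real system would get travel and pickup estimates from a maps and ride-hailing API.

```python
from datetime import datetime, timedelta

def plan_departure(appointment, travel_minutes, car_pickup_minutes, buffer_minutes=5):
    """Return (leave_at, order_car_at) worked backwards from an appointment."""
    leave_at = appointment - timedelta(minutes=travel_minutes + buffer_minutes)
    order_car_at = leave_at - timedelta(minutes=car_pickup_minutes)
    return leave_at, order_car_at

appt = datetime(2017, 3, 1, 9, 0)  # 9 a.m. meeting in Soho (illustrative date)
leave, order = plan_departure(appt, travel_minutes=25, car_pickup_minutes=7)
print(leave.strftime("%H:%M"), order.strftime("%H:%M"))  # 08:30 08:23
```

The system does the minutes of figuring for you: it orders the car at 8:23 so it arrives by the 8:30 departure, and all you hear is “leave in 10 minutes.”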
Technology is shifting. It’s why I love what we do at Jakt and why I built this company. Technology will be at the core of everything as it changes, and every industry is going to use it. It will keep evolving, and we are going to keep changing the interfaces we use as a result. As a company we started with web, then we moved to mobile. I think voice, sensors, and the like are where it’s going, so we are going to be playing in that field.
I love that other people in the company are thinking through these things as well. Jakt keeps evolving but we are still applicable to every industry.