Uncategorized Archives - Friend Michael - One Big Experiment
https://friendmichael.com/Categories/uncategorized
Father, husband, geek, entrepreneur, creator. Thank you for being here.

Wireless consumer VR: slip it on and Go. Anywhere.
https://friendmichael.com/Blog/wireless-consumer-vr-slip-it-on-and-go-anywhere.html
Tue, 29 May 2018 13:19:13 +0000

It’s been several days now with the Oculus Go, and I find I’m spending many hours per day in it. It’s quite a device for a $199 entry point. Add a decent pair of headphones and the value is pretty unreal. Keep in mind, that’s $199 all in. No PC required, no phone, nothing extra. That’s it.

I live in 350 sq ft. with my wife, daughter, and two dogs. It’s nice to be able to zone out and be in my own space without having to be tethered to the PC and the Samsung HMD Odyssey. I’ve even used it outside in a camping chair.

My current usage patterns suggest that it’s a replacement for Facebook, Instagram, Reddit, Flipboard, and the like on my iPhone X. I set up a couple of web-based Google Mail accounts too. It’s remarkably usable for these things. I have bookmarks set for all of them, so they’re just a click away.

As far as VR experiences, there are several things I keep going back to. Wonder Glade has several mini games. For some reason, I really enjoy the basketball and mini-golf.

Proton Pulse is a great breakout/bricks type game apparently made for Gear VR as it uses head motions, not the controller. I expect that’ll be updated, but it’s well worth the $2.99.

A couple of other interesting things: Mondly (interactive language practice) and MelodyVR (360° live concerts with multiple camera positions).

I also love that Altspace is here. That brings the promise of social VR to an untethered, inexpensive headset. I haven’t tested all of the games, but being able to play with others, cross platform, is intriguing.

I haven’t tested the party feature yet. I have a few friends with Go, so if you’re ever online at the same time I am, I’d be happy to give it a shot.

Of course the consumption experiences are great too. Hulu, Netflix, Amaze, Gala… they all do exactly what you expect.

There are some things that would make the experience better, but they’re certainly not show stoppers: copying and pasting text, some form of “right click” in the browser, pairing of other Bluetooth devices (keyboard, mouse, headphones), and a way to view a computer’s screen interactively. Think Bigscreen, but two-way.

Imagine setting up a virtual server at DigitalOcean with Ubuntu, and being able to control that machine from your Go, anywhere with Wi-Fi. I’d love to use this for work, but as with VR in general, this is still a wide open area for devs to tackle.

More soon.

You can pick one up at Best Buy, or follow this link to Amazon. It is an affiliate link, so if you make a purchase there, Heather and I will receive a small percentage of the sale.

How I’m overcoming commitment debt
https://friendmichael.com/Blog/im-overcoming-commitment-debt.html
Mon, 12 Jun 2017 01:22:34 +0000

It’s time to come clean:

You’ve probably noticed that I’ve been scaling way back on commitments for the past 6 to 9 months. From a hyper-reduced role in the Dallas Startup Community, to stepping down at Launch DFW, to fewer interactions with the City of Dallas, and even fewer within my own neighborhood (currently, The Cedars) and our neighboring communities.

I may have been “doing a lot of things” (and you bet I loved them all), but I did none of them with the focus they needed and required. The commitment was there, the excellence was not.

Over the past few months, Heather and I have been working on what’s next (you all know it as Epic Mini Life), and I’ve begun to dig into inboundgeo – deeper than I have in a long time. This is all according to plan, nothing new here. I do two things: inboundgeo and Epic Mini Life.

What I didn’t account for in the plan is how many plates I’d let fall, and how quickly their shattered remains would pile up. Instead of weaning myself off and correctly setting expectations with myself and the community, I gently opened the door, stepped through – turning around ever so quietly, listened for the click, stepped backwards… and walked away.

This has had several consequences, and not all bad frankly. I’ve chosen to focus on my business above all others. This is new for me, and it’s been enlightening. But I’ve let people down, and that feels shitty.

Here’s where I am today:

There are zero emails in any of my 7 accounts. I’ve moved the actionable ones to tasks in Asana (again), and have filed or deleted the rest. They’re requests for intros, requests for product feedback, advice, inboundgeo feature requests, and all kinds of things.

I will work my butt off to maintain inbox zero from here on out. That means any request of me will either be replied to immediately, or converted into a task to be dealt with later. I’m setting aside one hour per day to get through as many five-minute tasks as I can (things I think will take less than five minutes, but require more than a quick/terse reply). If it takes longer than five minutes, it will be handled as a scheduled item.

Here’s a new idea… I’m considering including the person who made the request as an invitee on the task in my calendar. Let’s say I think it’ll take a couple of hours to complete. I’d include you as an invitee when I schedule the task. You can decline it (or accept), but at least you’ll know when I plan to work on it. What do you think?

Here’s my request to you:

If I’ve let something fall through the cracks – intros, feedback, phone calls, etc., reach out and remind me: msitarzewski@gmail.com. I’m happy to do it, and I hope this new system will allow me to stay on top of these things moving forward.

As always, thanks for your patience and your confidence.

Be you, and kick ass at it.

The truth about why I’m leaving the Dallas Startup Community.
https://friendmichael.com/Blog/truth-im-leaving-dallas-startup-community.html
Sun, 26 Mar 2017 15:55:15 +0000

It may come as a bit of a surprise to hear that North Texas’ number one startup community evangelist is leaving the region. It’s true, we’re moving… but “why” is not the most shocking part.

DFW Nouveau. 2013 to Present.

You’ve more than likely been a part of an event I’ve led: Dallas Startup Week, Dallas New Tech, BigDOCC (the 8 other spinoffs technically count, as there were zero when I started the first two), Ignite DFW, Player’s Lunch, or the “tunnel tour.” Or you’ve at least heard my name attached to DFW and startups. It’s appeared in Dallas News, D Magazine, Dallas Business Journal, Launch DFW (of course) and many others outside of the region. I’ve mentored and judged at The DEC, Startup Weekend, Lean Startup Machine, and dozens of other events.

None of this happens in a vacuum. When I first arrived in 2013, remarkable people welcomed me. Gabriella Draney Zielke started it all, Trey Bowles, Jennifer Conley, Joel Fontenot, George Barber, Matt Himelfarb, Matt Alexander, Pam Gerber, Daniel Oney, and many, many more helped the new guy from Boulder understand what was here, and who was doing what. That’s community. Every one of them: “How can I help?”

And that’s the “startup” side of my life. I’ve also been entrenched in the homelessness conversation: a dozen 40+ person meetings at Dallas City Hall that produced the Commission on Homelessness, and of course Dignity Field. I was the President of the Cedars Neighborhood Association (2015-2017), and routinely met with people about my ideas for solving poverty issues. That too has landed my name in the press.

But that’s 2013 to present. To understand why I’m leaving, you have to understand the full story. Some of you have heard this; hang in there, I’ll make it quick.

Early Dallas: 1994 to 2006

My good friend Bracken and I built several internet things in Dallas in the 1990s: Apartments On-Demand (1994), Coupons On-Demand (1995), Classifieds On-Demand (1996), and finally sold one, MeetMeOnline.com (1997-1999). We did this with no support and no formal education (business, technical, etc.). In fact, we didn’t know a soul building anything like this in the ’90s. It was just us, building. I also ran Intelligent Networks and zerologic corporation – both Apple-related technology consulting companies (1993-2001). There are at least a dozen other experiments that never succeeded or got traction.

Boulder, CO. 2006 to 2013

While building HyperSites (in Dallas, 2001-2007), we decided to move the operation to Boulder, CO. We’d end up selling it in Boulder in 2007 (coincidentally, to Dallas based investors). That’s an important point, but the Boulder story doesn’t end there. Later came Callisto.fm (2010 to present), which evolved into Epic Playground (and MediaGauge). I also dabbled with GrillM (2009), Michael’s Garage (videos produced in my garage on how to build PCs from scratch), four podcasts (Boulder Open Podcast, Three Insight, Blipcasts, and OS Perspectives) and produced This Week in Techstars w/ David Cohen. I took over BOCC (2010) and started DOCC (open coffee clubs).

But Boulder was different. The power and confidence of being a part of that community was something I hope everyone feels at some point. Sure it had its pain points (right, Andrew?), but overall the experience was like getting a PhD in “startups.”

In fact, Andrew Hyde is one of the most influential people in my life. He gave of his time and energy constantly to help foster the very things I remember as great. He started Startup Weekend. By that, I don’t mean Startup Weekend Boulder. I mean Startup Weekend, period. He launched Boulder Startup Week, which I’d later implement in Dallas, and hundreds of others would all over the world. He also ran the largest Ignite event ever, in Boulder. But I digress.

Techstars would have a tremendous impact as well. Not just because two of the founders had committed a little money to the HyperSites round if we could get a lead (didn’t work out), but because that accelerator would bring in 10 new teams to Boulder every year, feeding the ecosystem with new blood. Eventually, it would have a more direct impact as my team and I went through Techstars Cloud in 2012.

Exodus 1.0

Over the course of the seven years in Boulder, several of its high profile members would leave – Andrew Hyde, Matt Galligan, Micah Baldwin, Rachel Ryle… and many more. Many of the teams that came in for Techstars would leave too, going back to their home towns, or on to other adventures.

How does a community respond to changes like this? There’s the natural “OMG, everyone’s leaving! What are we going to do!?” reaction. There’s the “I guess they weren’t committed to the community, man!” response. And the “Who needs them anyway, this place rocks!” response.

Something remarkable happens in a strong community though, as we’d come to find out. Other people step in, and step up. People that have played a role increase their visibility, and become the next change agents. New events, new relationships, and new opportunities for serendipity. Growth happens.

Today.

Instead of casting any doubt on the state of the DFW startup community, I’d encourage you instead to figure out how to step up and take an active role in building the next version. Don’t just go to events, participate. Don’t just talk about a startup idea, build it. Don’t complain about things, take actionable steps to fix them (see the Five Whys). Every strength and weakness in this community starts with you, dear reader. Be a part of something. Make it better by participating. Reporters/journalists, focus on the great things, and not the obvious drama… we need more from you. Use your power for good.

Back to us, and the fact that we’re leaving Dallas. The “why” is actually quite simple. Frankly, it has nothing at all to do with the Dallas Startup Community, and has everything to do with the fact that Heather and I want to do something epic. We want to travel the country in an RV for a few months, to experiment with a truly mobile lifestyle. We want to build a mini (550 sq. ft.) home by hand, and we want to be near Disney World when we do it. Remember, Heather is a Disney travel planner. But the bottom line is that we want to get the most out of life – today.

Heather and I wish you the best, and we’d be thrilled to have you along for the adventure. If you’ve ever dreamed of selling everything and hitting the road… follow us as we do exactly that: EpicMini.life. It might just inspire you to do the same. 🙂

How much does it cost to live “off the grid?”
https://friendmichael.com/Blog/how-much-does-it-cost-to-live-off-the-grid.html
Mon, 06 Feb 2017 04:01:10 +0000

Heather and I are fascinated with the idea of living off the grid in a much simpler, much less “distracting” environment. It’s not just to get away from the noise of the city buses and semis that whiz by just 30′ from our master bedroom window, or the random stranger passing at 3am, singing as if practicing a serenade, secluded in a steamy shower. It’s the broader noise and distractions of “modern” life. We want fewer things, highly intentional things, well thought out spaces, and land. Land for growing, for grazing, and for simply enjoying the evening sunset.

This is so weird. It’s abnormal. Maybe even impossible? No, but this lifestyle is in perpetual conflict with my desire to live high above, but directly connected by roots to the 24/7 vibe of the urban core. Walk, bike, use transit for the long haul trips. Everything is now, delivered to my doorstep, or streaming directly to my retina – by way of a fully immersive VR headset. That is equally attainable, in fact. But I digress.

What follows is a hypothetical recipe for achieving an off-the-grid life. To be clear, we have not done this; I’m interested in all of the feedback, however. Because one day, we will.

1. Land
2. hOMe
3. Solar Power
4. Water
5. Food
6. Cooking

Land

I’m amazed by the surplus of remarkably inexpensive land across the country. These properties range from $10k to $50k, are between 2.5 and 4 acres, and contain the word “mobile” in the listing, meaning they’ll likely allow a tiny home.

This spot is golden. It’s 3.8 acres of densely forested land near a lake, and a 15 minute drive from “town,” and it’s a remarkable $17,900. Yes, 3.8 acres for $18k.

The hOMe

The new hotness is the THoW (Tiny House on Wheels) – homes built on trailers with two or more axles, intended to be towed by a vehicle. They’re closer to RVs than a traditional mobile home (or double-wide). There are many differences, the details of which are far beyond the scope of this post.

And “tiny” itself doesn’t imply wheels – “tiny” can be the smallest permanent fabrication with just enough room to kneel and sleep. There are some simply remarkable builds in this style.

There are two primary ways to acquire a tiny home. First is to buy one outright. This can be a pre-planned home or a used one, and there are many options with each. If you’re going to order one, for the sake of this post, it’ll need to be designed for “off-the-grid” use.

The hOMe model by Tiny House Build is a fantastic floor plan. You can buy one ready-built, or buy the plans and build it yourself. The model in the video took the owners 4 months to build from scratch. Details: $33,089.72, 221 sq ft, plus two lofts totaling 128 sq ft. The full specs and plans are available here. This is an example of a modification to the hOMe, shown on FYI.

Solar Power

Off the grid means no city/county provided services. Power is the number one concern for many people looking at this lifestyle. There are many ways to reduce power consumption, and keep in mind that stoves in most tiny houses are propane. Add a wood burning stove and you can reduce power needs even more. Sure, $12,105.00 is a little on the high end – others have done it for less.

We won’t have a microwave or any appliances that will use as much energy, so I anticipate that our power requirements will be lower than the above systems can generate.
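For the curious, here’s a rough way to ballpark a system like this. Every load, sun-hour, and efficiency number below is a placeholder assumption for illustration, not a measurement from our build:

```python
# Back-of-the-envelope solar sizing for a small off-grid home.
# Every number below is a placeholder assumption -- substitute your own.

loads_wh_per_day = {
    "fridge (12V compressor)": 600,   # watt-hours per day
    "LED lighting": 100,
    "laptops and phones": 250,
    "water pump": 150,
    "fans and misc": 200,
}

daily_wh = sum(loads_wh_per_day.values())

peak_sun_hours = 4.5      # varies widely by region and season
system_efficiency = 0.75  # wiring, charge controller, battery round-trip losses

panel_watts = daily_wh / (peak_sun_hours * system_efficiency)

days_of_autonomy = 2   # cloudy-day reserve
usable_depth = 0.5     # don't discharge lead-acid batteries below ~50%
battery_wh = daily_wh * days_of_autonomy / usable_depth

print(f"Daily load: {daily_wh} Wh")
print(f"Panel array: about {panel_watts:.0f} W")
print(f"Battery bank: about {battery_wh:.0f} Wh")
```

With these made-up numbers, a 1,300 Wh/day budget needs only a few hundred watts of panels, which is why a stripped-down tiny home can come in well under the $12k figure above.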

Water

Cistern tanks are the way to store water. The source could be simple rainwater, a well-fed system, or even delivered water. Jesse & Alyssa, a couple in the Northwest, went through several iterations of storing water. Spend some time reading their posts; if someone else has done it, leverage their experience. Their blog is Pure Living for Life. Jesse said you can expect to spend about $1 per usable gallon, and I’d estimate about $1,500 for us.
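Using that $1-per-usable-gallon rule of thumb, the estimate works out like this (the 1,500 gallon storage target is my assumption, not a figure from their blog):

```python
# Rough cistern budget using the ~$1 per usable gallon rule of thumb.
# The 1,500 gallon storage target is an assumption, not a measured need.

cost_per_usable_gallon = 1.00    # dollars, rule of thumb
target_storage_gallons = 1500

estimated_cost = cost_per_usable_gallon * target_storage_gallons
print(f"Estimated water storage cost: ${estimated_cost:,.0f}")
```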

If you’re capturing rainwater or using creek water, you’ll need to filter it before using it for cooking or consumption. I’ve seen the Berkey Countertop Water Filter ($288.50) system in many tiny home builds. There are lots of things to consider with regard to water… but this is one of the top choices if you’re not using city water.

Food

We’d have plenty of space, with rich soil to plant trees, veggies, and whatever else will grow. We’d want a greenhouse for year-round needs, but the land would be used when available. I’d like a few free-roaming chickens for all of the obvious reasons. We have one pescatarian, and no vegans in the family.

While I can’t think of anything else we’d require, bartering with neighbors is an option, and being close to a grocery store will make up the difference.

Cooking

Propane is pervasive in the tiny world. From full size to RV to single burners, the choices are nearly endless. If you have grid access, the sky is actually the limit – microwaves and toaster ovens included. Off grid choices are interesting too. Wood burning stoves provide some cooking capabilities, but think really far outside the box – to solar ovens and fire pit cooking.

The home theater of the future is smaller and faster
https://friendmichael.com/Blog/home-theater-future-smaller-faster.html
Sun, 29 Jan 2017 23:42:52 +0000

Drop by any consumer electronics store and see what TVs are selling best. According to the CEA (Consumer Electronics Association), “Sales of super-sized TVs are up 50 percent in the past year, as prices on behemoth flat panels have dropped.”

The 65 inch range is great, and, in fact, is what we have in the living room. Ours is 8 years old, plasma, and weighs about as much as a full barge on the Mississippi, but I digress. The size of the TV is a great match for the room.

Today’s consumers demand larger, higher resolution screens to replicate the 100+ year old movie-going experience. Thinner, lighter, and with internet connectivity and apps. You want all of the latest apps: Netflix, Amazon Prime, Hulu. And it would be ideal to have a complete operating system for the ultimate in expandability. Content is king.

But a change is coming. I’m not talking 3DTV, or the gimmicky curved screen tech of the past few years. Those are just micro iterations on the same old technology. I’m talking about something as big as the jump from VHS to 4K just-in-time streaming through Netflix, Hulu, and YouTube.

What if the best screen for the single person’s living room of the very, very near future was just 10% of the size of today’s best selling TVs?

Wait. Ten percent? 6.5 inches?

Let me introduce the living room of the future. Today’s TVs have a viewing angle of 30° to 40° based on how far away the screen is from the viewer. The living room of the future will feature 120° angles or more. That’s right, you’ll be able to use your peripheral vision to see content! It’ll feel so close, you’ll want to reach out and touch it.
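That viewing angle figure is just trigonometry: a flat screen of width w, viewed from distance d, subtends an angle of 2·atan(w / 2d). A quick sketch (the screen size and seating distance are example numbers, not anyone’s measurements):

```python
import math

def viewing_angle_deg(screen_width: float, distance: float) -> float:
    """Horizontal angle (in degrees) a flat screen subtends at a given
    viewing distance. Units just need to match (inches here)."""
    return math.degrees(2 * math.atan(screen_width / (2 * distance)))

# A 65-inch 16:9 TV is about 56.7 inches wide. Viewed from 8 feet (96 inches):
tv = viewing_angle_deg(56.7, 96)
print(f"65-inch TV at 8 ft: {tv:.1f} degrees")  # lands in the 30-40 degree range
```

Run the same math for a headset screen an inch or two from a wide-angle lens and you get the 100°+ fields of view VR headsets advertise.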

But that’s not where the fun ends. It’s where it begins. Today’s audio systems are spatial – a popular setup is to have 7 speakers, and one subwoofer. This is known as 7.1 surround sound. In the future, you’ll just want to keep the subwoofer – it’s the speaker that delivers the real punch, the lows, the sounds that shake things. A “Bass Shaker” will easily suffice in lieu of a subwoofer.

So what’s this crazy future? A massive improvement in viewing angle, and just one speaker? Instead of one giant screen, I predict that it will be two ultra high resolution 6 inch screens, just inches from your retina. Content will be delivered in streams thanks to the proliferation of 100 to 300 megabit internet connections, viewable in a full 360 degrees, or rendered on the device itself. Audio will be delivered directly to your ears with the lows coming from the one remaining speaker.

People all over the globe are already living in a similar future. The future where the perfect TV for the living room is no TV at all, actually. It’s a PC, driving a Virtual Reality headset, with great headphones.

You see, in the future, entertainment will no longer be about size and simulated immersion. It will be about actual immersion, and that takes no space at all.

What do computer graphics, assisted intelligence, Watson, and actual humans have in common?
https://friendmichael.com/Blog/computer-graphics-assisted-intelligence-watson-actual-humans-common.html
Tue, 27 Dec 2016 02:01:44 +0000

Recently I spent some time reading, researching, and watching the state of the art in several areas of interest, notably CGI/computer animation, VR, text-to-voice, and facial pattern recognition. The takeaway is that all of the pieces are in place for an idea I’ve had for several years. Be advised, many of the videos included below are around a year old, so following up directly with their makers may reveal even better implementations.

What follows is a short story about each piece, then a video showing how this piece fits today. The concepts, taken as a whole, will form the foundation of an idea that could potentially change how we get things done in real life. It’s worth the read, and it’s a doozy. Grab a cup of your favorite whatever.

As always, if you like this content, please share it so others can enjoy it too. You never know who will read these things… they might be the one to take this and actually change the world.

Input: the human voice.

Human-computer interaction (HCI) has taken many forms over the years. We know the keyboard and mouse, the stylus, touch, motion, and now of course we have person to person, or social chat. One of the fastest growing segments in tech is voice recognition, and owning the home of your users. This is voice to text to [internet/cloud/action] to text to voice.

Apple has Siri, Amazon has Alexa, Microsoft has Cortana, and Google has, well, “Hey Google.” All of these services empower their owners to ask questions about seemingly anything, to change the TV channel, to order more diapers, or close the garage. They’re all early from a technology standpoint, and have varying degrees of friendliness and usability. But the core is here, and they’ll only get better with time.

All you have to do is talk, in your native language, and magic (or algorithms) happens.
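To make that round trip concrete, here’s a toy sketch of the voice-to-text-to-action-to-text-to-voice loop. Every function is a made-up stub standing in for a real recognition, intent, or synthesis service; none of these are real vendor APIs:

```python
# Toy sketch of the voice -> text -> [cloud/action] -> text -> voice loop.
# Every function is a made-up stub; none of these are real vendor APIs.

def speech_to_text(audio: bytes) -> str:
    # A real system would send the audio to a recognition service.
    return "turn off the garage light"

def run_intent(utterance: str) -> str:
    # A real assistant would classify the intent, call a smart-home or
    # cloud API, and return a textual result.
    if "garage light" in utterance and "off" in utterance:
        return "OK, the garage light is off."
    return "Sorry, I didn't catch that."

def text_to_speech(text: str) -> bytes:
    # Stand-in for synthesized audio.
    return text.encode("utf-8")

def handle_request(audio: bytes) -> bytes:
    return text_to_speech(run_intent(speech_to_text(audio)))

print(handle_request(b"<microphone audio>").decode())
```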

Input: plain old text.

Another interface is text to text. We know these as chatbots, and you interact with them through several input channels: Twitter, SMS, Facebook Messenger, and dozens of others. Companies like Conversable are doing a great job in this space: “I’d like a large cheese pizza delivered to my house. Use my saved payment method.”

While one is initiated by voice and the other by text, they’re just a hair apart from a technology perspective. Speech-to-text is nothing new; it’s been a work in progress since the earliest days of computing. Add assisted intelligence to the text output, and now we’re cooking.

Need something? Just type it, and the tech will respond.

Input: human emotion.

While voice and text are great inputs, video is an even better one. Recognizing a person has become so trivial that it can be done with a simple API call. Technology can be sure it’s “you” before you interact with it. Microsoft uses this to automatically log you into Xbox with a Kinect device.

More than detecting who you are, computers can also detect emotion in video. This used to require a room at a university, and was only done as a part of a research project. Today, we can accomplish this with just about any standard web cam, or a front facing camera on a smartphone.

We know it’s you, and how you’re feeling at this precise moment. “How can I help, Michael?”

Output: human voice.

Even the best synthesized voices still sound, well, electronic. Enter a new technology from Adobe called “vocal editing.” This tech was demoed at Adobe MAX 2016, and uses a recorded voice to allow the “photoshopping” of that voice.

It’s early, but this tech exists, and could be the voice interaction component of this idea. This demo uses a recording just a few seconds long. Imagine what would be possible with dozens of hours of training recordings. The only input required after that is text. Text is the primary output of all of today’s Assisted Intelligence (AI) applications, like Watson for example (IBM Watson, How to build a chatbot in 6 minutes). This is the next logical step:

This technology could easily be used in real time to allow “bots” to make calls to humans, and the humans would be none the wiser. They could even use your voice, if you allow it. Bots can take voice as input (voice to text) and output text that gets sent back in the form of audio (text to voice), using this technology.

Any text input, from any source, with one remarkably consistent voice response. Maybe even your own.

Display: character-persistent, animated avatar.

When I had the original idea, the avatar the user interacted with was a simple CGI character, an obviously rendered character that would remove any interaction distraction. I wanted every touch point with the avatar to be as simple as possible, so you’d spend all of your time focused on the task, and not distracted by its interface. This may still be the best option, but I see that gap closing quickly.

Here’s Faceshift at GDC 2015 (since acquired by Apple), but others (like Faceware, nagapi) exist in the market. Notice two completely different actors playing the same character.

Disney Research Labs has similar technology already in use.

The movie viewer never sees the actor, only the character. With the voice tech above and a character representation, we’ve removed two persistence problems: voice and character. Anyone (or anything, including AI) can provide a consistent human voice, and anyone (sex, build, race, location, whatever) can power the physical representation.

Every single time you interact with the tech, the avatar looks and sounds the same – no matter who the actor is on the other side of the animation.

Display: the human presence.

We’ve seen remarkable leaps forward in an age-old (in computer years) tech called motion capture. Meatspace actors wear a variety of sensors and gadgets that allow computers to record their motion for later use. This used to appear only in the best of the best games and movies. Just about everything you see in major releases today (from a CGI standpoint) is based on motion capture, if it involves humans.

Traditional motion capture was just a part of the process though. Scenes would be shot on a green screen, then through (sometimes) years of perfection, a final movie would grace theater screens or appear in an epic game release.

At Siggraph 2016, Epic Games (makers of the Unreal Engine) featured a new process that amounts to “just-in-time acting.” Instead of capturing the actors and using that motion later in scenes, Epic used a process that rendered results in real time. It’s mind blowing – using the camera in the game engine to record a motion-captured actor, in-game.

Display: enhanced human presence.

The problem with CGI and humans is something called the uncanny valley: “a computer-generated figure or humanoid robot bearing a near-identical resemblance to a human being arouses a sense of unease or revulsion in the person viewing it.”

Explained in Texan: “Well, it might be close, but it ain’t quite right.” It may be getting close enough.

There are several ways humans protect themselves from attack. One of the simplest is recognizing fellow humans. Sometimes they may want to harm us, other times hug us. But either way, we’re really, really good at recognizing deception.

Until now. This piece was created with Lightwave, Sculptris and Krita, and composited with Davinci Resolve Lite – in 2014 (two years ago).

In 2015, a video was released by USC ICT Graphics Laboratory showing how advanced skin rendering techniques can be deceptively good. Another video by Disney’s Research Hub shows a remarkable technology for rendering eyes. And earlier this year, Nvidia released a new demo of Ira.

Display: enhanced facial reenactment method.

An advancement I didn’t expect to see so soon takes standard digital video footage (a newscast or a YouTube video, for example) and allows an actor, using a standard webcam, to transfer expressions to the target actor. It’s a clever technology.

If the technology works as well as it appears to with simple, low resolution sources, imagine what could be done with professional actors creating 48 hours of source video. That could in turn be targeted by digital actors using a combination of the above video technologies. The interface to this technology would be a recorded human actor with transferred facial expressions from a digital actor, all rendered in real time.

Bringing it all together.

Inputs: voice, text, video, and emotion.

Processing: assisted intelligence, APIs: input to voice.

Outputs: text, human voice, and/or photo realistic CGI/animated characters/human.

But wait. There’s one more thing.

This is great for an AI powered personal assistant. Marrying all of this tech together into one simple and cohesive interface would make everything else feel amateur.

But what if we could add an actual person (or four) to the mix? Real human beings available 24/7 (in shifts) to note your account, or to call your next appointment to let them know you’ve arrived early? What if your assistant could call the local transit agency, or cancel a flight by voice in whatever the local language happens to be?

All of the technologies mentioned above create an intentional gap between the inputs and outputs, allowing any number of “actors” in between. If a task is suitable for a bot to handle, then a bot should handle it and reply. If a human is required, the user should never know a human stepped in to take control of the interaction. The voice, text, and display will be 100% the same to protect the experience.
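As a sketch, that routing logic might look like the following. All names and the confidence check are illustrative stand-ins, not a real system:

```python
# Sketch of the "intentional gap": requests go in, one consistent persona
# comes out, and the user never learns whether a bot or a human replied.
# All names and logic here are illustrative, not a real implementation.

from dataclasses import dataclass

@dataclass
class Request:
    text: str

def bot_can_handle(req: Request) -> bool:
    # Stand-in for an intent classifier's confidence check.
    return "appointment" in req.text

def bot_reply(req: Request) -> str:
    return "Your appointment is confirmed."

def human_reply(req: Request) -> str:
    # A human operator types the reply; it is rendered through the same
    # voice/avatar layer, so the output is indistinguishable.
    return "Let me take care of that for you."

def render_persona(text: str) -> str:
    # Stand-in for the persistent voice + avatar pipeline described above.
    return f"[assistant voice] {text}"

def respond(req: Request) -> str:
    raw = bot_reply(req) if bot_can_handle(req) else human_reply(req)
    return render_persona(raw)

print(respond(Request("book my dentist appointment")))
print(respond(Request("call the transit agency")))
```

Either path produces output in the same persona, which is the whole point: the router can swap a human in behind the scenes without breaking the experience.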

Think about it: any language in (from either side), and your specific language and video representation out. If there were a maximum of four people that knew you more intimately than your family, but you knew you’d never, ever have to think about this problem again, would you do it?

In summary, I’ve outlined a highly personalized virtual assistant, with 100% uptime and omnipresence across every device and interface you have (including VR, but let’s save that for another time).

What you won’t know is whether you’re talking to a human, or machine.

If you liked this, please share it. Your friends will dig it too. Thank you!
