Game Changers Archives - Friend Michael - One Big Experiment
https://friendmichael.com/Categories/game-changers
Father, husband, geek, entrepreneur, creator. Thank you for being here.
Sun, 10 Jun 2018 02:20:31 +0000

How consumers are about to revolutionize casual gaming. Again.
https://friendmichael.com/Blog/how-consumers-are-about-to-revolutionize-casual-gaming-again.html
Sun, 10 Jun 2018 02:09:26 +0000

Something finally hit me like a ton of bricks. We’ve been here before.

How many of you remember what the gaming ecosystem looked like in 2007? There were consoles and PC gaming, Macs were practically a no-show except for “light” games, and mobile gaming consisted of ports of 8-bit game engines and evolved versions of Snake.

No one cared about playing games on the phone; that’s not what phones were for. They were for email (Windows CE, BlackBerry), messaging, and phone calls. Nokia’s N-Gage platform notwithstanding 🙂

Fast forward to today: iOS and Android phones own the market that was created when the iPhone was released… a market called “Casual Games.” There’s been no shortage of debate about how powerful phones are and how well they can play games, but without a doubt, nearly everyone plays games on their phone.

These games aren’t typically using the latest whiz-bang graphics, or VR, or even team play. They’re nothing like what a “gamer” would play; they’re far too uninteresting. The gamer wants wicked refresh rates, absurd FPS, and the latest and greatest GPUs and CPUs with as much memory as possible. Add a VR headset and the requirements climb further.

The casual gamer wants to be able to enjoy themselves: puzzle games, growing farms, checkers, peer-to-peer backgammon, and so on. Things that run perfectly on their mobile devices.

What’s happening today is a very similar revolution. Oculus released the Oculus Go, powered by what amounts to a mobile phone’s core. They’ve stripped out the non-essential software and hardware and put it on the market.

What’s different this time? The Oculus Go leverages a well-tuned app store ecosystem, developed with their partners at Samsung while building Gear VR. Why does the app store matter? As Greg Joswiak, Apple’s vice president of iOS, iPad and iPhone marketing, put it in a Rolling Stone story called “Apple: How iPhone Gaming Revolutionized Video Games”:

“We thought maybe we’d get 50 apps to start, but on the first day we had 500, and we thought that was an omen. But I’d be lying if I said we thought it would be as revolutionary as it would become. It’s changed the world. It’s changed the way software is written and distributed. It’s changed the gaming industry.”

Simply? Consumers want an easy button. The Oculus Go is incredibly simple and easy. It is not for the “gamers” among us; it’s a very simple and elegant entry into the consumer VR space. It provides exactly the same experience that current casual games do on iOS and Android, but in VR: you can play with friends, watch movies and TV, and do most of it in real time together.

Here’s a quote from a friend of mine, and new Go convert/evangelist Elie Finegold: “Got another one today for my wife so we can hang together while I’m traveling.” This comes from our first experience in Oculus Rooms. He and I spent the better part of an hour just chatting and catching up. He was so taken by it, well, you see what happened.

We’re on the edge of something great here. I hope you’ll follow along for more as it unfolds.

Previous Go stories:
New to the Oculus Go? Here are 10 apps to get you started.
Wireless consumer VR: slip it on and Go. Anywhere.

Wireless consumer VR: slip it on and Go. Anywhere.
https://friendmichael.com/Blog/wireless-consumer-vr-slip-it-on-and-go-anywhere.html
Tue, 29 May 2018 13:19:13 +0000

It’s been several days now with the Oculus Go, and I find that I’m spending many hours per day in it. It’s quite a device for a $199 entry point. Add a decent pair of headphones and the value is pretty unreal. Keep in mind, this is $199, all in. No PC required, no phone, nothing extra. That’s it.

I live in 350 sq ft. with my wife, daughter, and two dogs. It’s nice to be able to zone out and be in my own space without having to be tethered to the PC and the Samsung HMD Odyssey. I’ve even used it outside in a camping chair.

My current usage patterns suggest that it’s a replacement for using Facebook, Instagram, Reddit, Flipboard, and so on, on my iPhone X. I set up a couple of web-based Google Mail accounts too. It’s remarkably usable for these things. I have bookmarks set for all of them, so they’re just a click away.

As far as VR experiences, there are several things I keep going back to. Wonder Glade has several mini games. For some reason, I really enjoy the basketball and mini-golf.

Proton Pulse is a great breakout/bricks type game apparently made for Gear VR as it uses head motions, not the controller. I expect that’ll be updated, but it’s well worth the $2.99.

A couple of other interesting things: Mondly (interactive language practice) and MelodyVR (360° live concerts with multiple camera positions).

I also love that Altspace is here. That brings the promise of social VR to an untethered, inexpensive headset. I haven’t tested all of the games, but being able to play with others, cross platform, is intriguing.

I haven’t tested the party feature yet. I have a few friends with a Go, but if you’re ever online at the same time I am, I’d be happy to give it a shot.

Of course the consumption experiences are great too. Hulu, Netflix, Amaze, Gala… they all do exactly what you expect.

There are some things that would make the experience better, but they’re certainly not showstoppers: copying and pasting text, some form of “right click” in the browser, pairing other Bluetooth devices (keyboard, mouse, headphones), and a way to view a computer’s screen interactively. Think Bigscreen, but two-way.

Imagine setting up a virtual server at DigitalOcean with Ubuntu and being able to control that machine from your Go, anywhere with Wi-Fi. I’d love to use this for work, but as with VR in general, this is still a wide-open area for devs to tackle.

More soon.

You can pick one up at Best Buy, or follow this link to Amazon. It is an affiliate link, so if you make a purchase there, Heather and I will receive a small percentage of the sale.

Redefining the Driver’s License
https://friendmichael.com/Blog/redefining-the-drivers-license.html
Mon, 07 May 2018 17:07:20 +0000

This simple idea applies to literally everyone with a driver’s license. The keyword here is “license.” And yes, it involves you.

A license is granted once the applicant, typically a spry and eager teenager, passes a written test and the state sponsored or licensed driving test. Permits are issued under certain circumstances, but require a fully licensed driver to be present at all times while the permitted driver is behind the wheel. There is an early age requirement for both the license and the permit, and both require a very basic understanding of the laws of the road and basic vehicle operation.

Likewise, a license can be revoked by the issuer for many reasons… DUI, medical issues, too many “points” for infractions, etc.

It is a license to drive. There is no right to drive. It’s a privilege, earned by proving your understanding of the law and basic vehicle operation.

I have a simple proposal that would revolutionize the safety of drivers and pedestrians, and would lead to a guaranteed increase in funding for public roads.

Ready for it?

This is an idea so simple, it’s unbelievable.

I’m a software developer by trade with 25 years of experience behind me. One of the items I have to deal with on a regular basis, and something everyone reading this is familiar with, is the “Software License Agreement.” License agreements are the little modal boxes that appear when you open software for the first time, or when you boot your new phone for the first time. Your computer, tablet, TV… they’re all bound by license agreements.

My proposal is to tie a similar license agreement to the driver’s license. This license agreement could be updated whenever necessary to incorporate new technologies related to driving, tolling, and public safety. It would require that each licensee have a correct and current method of contact tied to their license (as is already required by law).

Every update to the physical license requires a new agreement. Get a new license? Change your address? Renew your license? Lose it? You must agree to the new terms. This would ensure that every licensed driver in the US would have to agree to the terms within the next 5 to 10 years.

Typically it’s incumbent upon the user to check the license agreement for changes periodically, but the grantor also sends update notifications via email, text message, or snail mail. In this case, media would most certainly cover changes. Each state office can issue updates via social media channels, or via a simple newsletter subscription.

Why on earth would I propose such a preposterous scenario? It’s simple. While most people are good, honest, law-abiding citizens, there is an ever-growing group of individuals who would follow the law to the letter with a little more encouragement. States and municipalities have tried various ways of automating the law – red light cameras, speed traps (vans with speed-sensitive cameras and measurement), and more.

If your state sees fit to implement automated methods to ensure public safety, those could easily be incorporated into the agreement.

For example, all toll roads could become speed monitors. They know when you enter and when you leave at each entrance and exit, so computing your average speed is a simple math problem. They already have the vehicle’s license plate, so tying this back to the licensed vehicle and its owner is simple. Other automated means of speed patrol could be implemented – autonomous drones, sign-affixed apparatus, etc. Given a range of tolerance (+10%), this would be highly effective at deterring speeding.
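The math involved really is just distance over time. Here’s a minimal sketch of the idea – the function names, the 30-mile spacing, and the 10% tolerance are illustrative, not any state’s actual system:

```python
from datetime import datetime

def average_speed_mph(entry_time: datetime, exit_time: datetime,
                      miles_between: float) -> float:
    """Average speed between two toll points: distance / elapsed time."""
    hours = (exit_time - entry_time).total_seconds() / 3600
    return miles_between / hours

def is_violation(avg_speed: float, speed_limit: float,
                 tolerance: float = 0.10) -> bool:
    """Flag only when the average exceeds the limit by more than the tolerance."""
    return avg_speed > speed_limit * (1 + tolerance)

# A car covers the 30 miles between booths in 24 minutes -> 75 mph average.
entry = datetime(2018, 5, 7, 9, 0, 0)
exit_ = datetime(2018, 5, 7, 9, 24, 0)
avg = average_speed_mph(entry, exit_, 30.0)
print(avg)                      # 75.0
print(is_violation(avg, 65.0))  # True: 75 > 65 * 1.10 = 71.5
```

Note that averaging over the whole segment is forgiving by design: you could briefly exceed the limit and still pass, as long as your overall pace stays under the threshold.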

Another item in the agreement is red light cameras, and other automated traffic safety items. Driving through a crosswalk when the pedestrian present notification lights are flashing, school zone infractions, passing on the right, trucks in the left lane on freeways, and the list goes on and on. All of these can be automated, and should be.

Imagine going through a toll booth, then hearing your favorite navigation app tell you that you’ve just earned a point on your license and a $75 bill from the state because your average speed between booths was more than 10% over the stated speed limit. That $75 would be charged directly to your toll bill. It’s simple.

This could be implemented in no time, and with a relatively small budget (that would pay for itself quickly), with a simple state issued Mandatory Driver’s License Agreement.

A state mandated license agreement could be updated when new technologies enter the market. For example, what needs to change when autonomous vehicles enter the retail landscape?

If you oppose this idea, I encourage you to take a step back and think about why. It will always come down to the law. Rules are made to be broken; laws are not. If you speed (like I do, mind you) then you’re knowingly and intentionally breaking the law. Any aversion to automated enforcement is a personal plea to be allowed to break the law. I know for a fact that I’d speed less (I’ve already become far more aware, and try to stay under the 10% threshold).

Pedestrian safety is an issue that needs to be addressed, and current methods are falling short. We have the technology to solve this. And we should.

Free idea: decentralized avatar repository for Social VR
https://friendmichael.com/Blog/free-idea-decentralized-avatar-repository-social-vr.html
Mon, 25 Dec 2017 17:10:42 +0000

In 2004, Tom Preston-Werner created something huge. His idea was cemented in history when Matt Mullenweg and Automattic acquired the service in 2007. Matt and his team developed WordPress, and integrated Tom’s creation into the code base. What was that creation? Gravatar.

The idea was simple, really – a central repository for your digital persona. Create an account with your email address, upload a photo (or photos), and any developer using the Gravatar APIs automatically has access to your data to fill in profile information. It means you can update and maintain your profile in one place and that data gets updated everywhere. It’s a one-to-many internet profile.
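Part of why Gravatar spread so widely is how tiny the integration is. The classic, documented scheme is just: hash the trimmed, lowercased email address (MD5 in the original API) and drop the hex digest into an image URL. A minimal sketch using only Python’s standard library:

```python
import hashlib

def gravatar_url(email: str, size: int = 200) -> str:
    """Build a Gravatar image URL per the classic documented scheme:
    MD5 of the trimmed, lowercased email, appended to the avatar endpoint."""
    email_hash = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return f"https://www.gravatar.com/avatar/{email_hash}?s={size}"

# Case and surrounding whitespace don't matter -- same user, same avatar.
print(gravatar_url("Someone@Example.com "))
```

No account lookup, no OAuth, no SDK – which is exactly the property a multi-dimensional successor would want to preserve.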

The time is now for a multi-dimensional version of this application. Here are a few ideas:

  • Open, and decentralized using an IPFS style storage engine
  • 3D avatars – as many as the user can create, but only one active at a time
  • Support for the major model formats (3ds, max, c4d, maya, blend, obj, fbx)
  • All avatars would have a well documented skeletal API for movement controls when used in 3rd party systems
  • Tight integration with OpenSVR – the Social VR API
  • Character inventory storage and retrieval – think cloud storage for the “bag of holding” with pouches for each application using the APIs.
  • Toggles for things like user name display, microphone control, bubbles, and content rating controls
  • Enable API based import – Sketchfab -> OpenAvatar with one click.
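To make the list above concrete, here’s a rough sketch of what one record in such a repository might look like, including the “only one active at a time” rule. Everything here – the class names, the fields, the method – is hypothetical, since OpenAvatar doesn’t exist:

```python
from dataclasses import dataclass, field

@dataclass
class Avatar:
    """One avatar in the hypothetical OpenAvatar repository."""
    cid: str            # content address (IPFS-style) of the model file
    model_format: str   # e.g. "fbx", "obj", "blend"
    skeleton_api: str   # version of the documented skeletal API it exposes
    active: bool = False

@dataclass
class Profile:
    owner: str                         # e.g. an email address or public key
    avatars: list = field(default_factory=list)

    def activate(self, cid: str) -> None:
        """Enforce the 'only one active at a time' rule from the list above."""
        for a in self.avatars:
            a.active = (a.cid == cid)

p = Profile(owner="user@example.com",
            avatars=[Avatar("cid1", "fbx", "1.0"), Avatar("cid2", "obj", "1.0")])
p.activate("cid2")
print([a.active for a in p.avatars])  # [False, True]
```

A third-party app would then resolve the one active avatar by content address and drive it through the documented skeletal API.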

This concept would allow developers to spend less time building avatar systems, allowing them to focus on the thing that matters most – the experience. Users benefit by having the same avatar everywhere that matters. If you want to play Robo Recall as a fairy princess from wherever… well this makes that possible.

Being able to recognize other players by their avatar across all Social VR experiences would make the experience feel closer to reality. It might seem strange to see a photo realistic avatar in a cartoon world (like Rec Room), but that’s what needs to happen.

What do you think? Leave a comment below!

Here are more stories in the VR category.

Free idea: An open Social VR API
https://friendmichael.com/Blog/free-idea-open-social-vr-api.html
Mon, 25 Dec 2017 03:32:16 +0000

Before we get to Social VR, let’s recap. It’s 2017, almost 2018, and virtual reality systems are selling better than ever. The variety of VR hardware is stunning, with prices ranging from a simple $10 “cardboard” viewer to multi-thousand-dollar haptic VR rigs with 360-degree rotation.

The individual titles available are getting more immersive and users are spending hours, hundreds of hours, with HMDs engaged. Game titles like VRChat, Rec Room, and OrbusVR are taking off. Their common theme? They’re Social VR.

Each of the major players in the space has some form of home (or house) as the default location when you don the gear. All of them act as launchers for other experiences and applications. SteamVR launches and interacts with the Steam platform, Oculus Home/Dash interacts with the Oculus ecosystem, and Microsoft and Sony have their own. Oculus Dash 2 is a step in the right direction, and even has some elements of Ready Player One. But what happens with Vive or Windows Mixed Reality users?

Facebook took a remarkable step last week by opening the once Rift exclusive Facebook Spaces to Vive users. Of course anyone could use it with Revive, but this is official support. It’s a recognition that the combined market is a much larger opportunity. But I digress.

One thing they all have in common is that these core launchers are not social in any way. I can’t invite you to hang out in my Cliff House, then jump into a game of Rec Room together and return to the house upon exit. None of them work this way. Why? More importantly, why shouldn’t they?

Let’s liberate Social VR and make it open source and cross platform. Not just OS, but dev environment too. Maybe OpenSVR?

What if we could build an open API for Unity, Unreal, and WebXR that remembers the state of a user’s VR experience? As the user exits, this object would collect data about that specific point in time, then save a 360-degree “live” image (like Apple’s Live Photos on iOS) of the exit point. It could track play/use over time, plus dozens of data points that could come in handy.

The 360 degree image captured at the time of exit could wrap the inner sphere of a teleportation portal. We’ve seen a form of this with 360 degree videos in Facebook Spaces. To play the game again, tap the sphere in High Fidelity or your preferred open Social VR platform. To play with friends, have them tap the same sphere, anywhere in the metaverse.
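A rough sketch of what that exit-state object might carry. All of the names here are hypothetical, since OpenSVR is only a proposal:

```python
from dataclasses import dataclass
import time

@dataclass
class ExitState:
    """What a hypothetical OpenSVR layer might persist when a user exits."""
    app_id: str             # the experience being left, e.g. "rec-room"
    user_id: str
    timestamp: float        # when the exit happened
    snapshot_uri: str       # the 360-degree image wrapped onto the portal sphere
    session_seconds: float  # play/use tracked over time

def capture_exit_state(app_id: str, user_id: str,
                       session_start: float, snapshot_uri: str) -> ExitState:
    """Collect the point-in-time data described above at the moment of exit."""
    now = time.time()
    return ExitState(app_id, user_id, now, snapshot_uri, now - session_start)
```

The portal described above would then be rendered from `snapshot_uri`, and tapping it (alone or with friends) would relaunch `app_id` from this saved state.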

This sounds way harder than it is. This is a layer that gets built into the developer’s tools of choice. Similar things exist for iOS (Game Center) and Android, and Microsoft has the Xbox Live platform. What I’m proposing is 100% open source.

As we move toward work in VR, shared experiences with friends and colleagues will be transformative to human relationships. This is an important step.

What are your thoughts? Leave a comment below!

The home theater of the future is smaller and faster
https://friendmichael.com/Blog/home-theater-future-smaller-faster.html
Sun, 29 Jan 2017 23:42:52 +0000

Drop by any consumer electronics store and see which TVs are selling best. According to the CEA (Consumer Electronics Association), “Sales of super-sized TVs are up 50 percent in the past year, as prices on behemoth flat panels have dropped.”

The 65 inch range is great, and, in fact is what we have in the living room. Ours is 8 years old, plasma, and weighs about as much as a full barge on the Mississippi, but I digress. The size of the TV is a great match for the room.

Today’s consumers demand larger, higher-resolution screens to replicate the 100+ year old moviegoing experience. Thinner, lighter, and with internet connectivity and apps. You want all of the latest apps – Netflix, Amazon Prime, Hulu – and it would be ideal to have a complete operating system for the ultimate in expandability. Content is king.

But a change is coming. I’m not talking 3DTV, or the gimmicky curved screen tech of the past few years. Those are just micro iterations on the same old technology. I’m talking about something as big as the jump from VHS to 4k just in time streaming through Netflix, Hulu, and YouTube.

What if the best screen for the single person’s living room of the very, very near future was just 10% of the size of today’s best selling TVs?

Wait. Ten percent? 6.5 inches?

Let me introduce the living room of the future. Today’s TVs offer a viewing angle of 30° to 40°, depending on how far the screen is from the viewer. The living room of the future will feature 120° angles or more. That’s right, you’ll be able to use your peripheral vision to see content! It’ll feel so close, you’ll want to reach out and touch it.
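Those viewing-angle numbers are easy to sanity-check: a flat screen of width w viewed from distance d subtends an angle of 2·atan(w/2d). A quick check in Python (the 65-inch TV at 8 feet is just an example; a 65-inch 16:9 panel is about 56.7 inches wide):

```python
import math

def viewing_angle_deg(screen_width_in: float, distance_in: float) -> float:
    """Horizontal field of view subtended by a flat screen at a given distance."""
    return math.degrees(2 * math.atan(screen_width_in / (2 * distance_in)))

# A 65-inch 16:9 TV (~56.7 inches wide) viewed from 8 feet (96 inches):
print(viewing_angle_deg(56.7, 96))  # ~32.9 degrees -- inside the 30-40 range
```

To hit 120° with a flat screen you’d have to sit absurdly close (or use an absurdly wide screen), which is exactly why the headset approach wins here.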

But that’s not where the fun ends. It’s where it begins. Today’s audio systems are spatial – a popular setup is to have 7 speakers, and one subwoofer. This is known as 7.1 surround sound. In the future, you’ll just want to keep the subwoofer – it’s the speaker that delivers the real punch, the lows, the sounds that shake things. A “Bass Shaker” will easily suffice in lieu of a subwoofer.

So what’s this crazy future? A massive improvement in viewing angle, and just one speaker? Instead of one giant screen, I predict that it will be two ultra high resolution 6 inch screens, just inches from your retina. Content will be delivered in streams thanks to the proliferation of 100 to 300 megabit internet connections, viewable in a full 360 degrees, or rendered on the device itself. Audio will be delivered directly to your ears with the lows coming from the one remaining speaker.

People all over the globe are already living in a similar future. The future where the perfect TV for the living room is no TV at all, actually. It’s a PC, driving a Virtual Reality headset, with great headphones.

You see, in the future, entertainment will no longer be about size and simulated immersion. It will be about actual immersion, and that takes no space at all.

What do computer graphics, assisted intelligence, Watson, and actual humans have in common?
https://friendmichael.com/Blog/computer-graphics-assisted-intelligence-watson-actual-humans-common.html
Tue, 27 Dec 2016 02:01:44 +0000

Recently I spent some time reading, researching, and watching the state of the art in several areas of interest, notably CGI/computer animation, VR, text-to-voice, and facial pattern recognition. The takeaway is that all of the pieces are in place for an idea I’ve had for several years. Be advised, many of the videos included below are around a year old, so following up directly with their makers may reveal even better implementations.

What follows is a short story about each piece, then a video showing how this piece fits today. The concepts, taken as a whole, will form the foundation of an idea that could potentially change how we get things done in real life. It’s worth the read, and it’s a doozy. Grab a cup of your favorite whatever.

As always, if you like this content, please share it so others can enjoy it too. You never know who will read these things… they might be the one to take this and actually change the world.

Input: the human voice.

Human-computer interaction (HCI) has taken many forms over the years. We know the keyboard and mouse, the stylus, touch, motion, and now of course we have person to person, or social chat. One of the fastest growing segments in tech is voice recognition, and owning the home of your users. This is voice to text to [internet/cloud/action] to text to voice.

Apple has Siri, Amazon has Alexa, Microsoft has Cortana, and Google has, well, “Hey Google.” All of these services empower their owners to ask questions about seemingly anything, to change the TV channel, to order more diapers, or close the garage. They’re all early from a technology standpoint, and have varying degrees of friendliness and usability. But the core is here, and they’ll only get better with time.

All you have to do is talk, in your native language, and magic (or algorithms) happens.

Input: plain old text.

Another interface is text to text. We know these as chatbots, and you interact with them through several input channels: Twitter, SMS, Facebook Messenger, and dozens of others. Companies like Conversable are doing a great job in this space: “I’d like a large cheese pizza delivered to my house. Use my saved payment method.”

While one is initiated by voice, and the other by text, they’re just hairs apart from a technology perspective. Speech to text is nothing new, it’s been a work in progress since the earliest times in computing. Add assisted intelligence to the text output, and now we’re cooking.

Need something? Just type it, and the tech will respond.

Input: human emotion.

While voice and text are great inputs, video is an even better input. Recognizing a person has become so trivial that it can be done with a simple API call. Technology can be sure it’s “you” before you interact with it. Microsoft uses this to automatically log you into XBOX with a Kinect device.

More than detecting who you are, computers can also detect emotion in video. This used to require a room at a university, and was only done as a part of a research project. Today, we can accomplish this with just about any standard web cam, or a front facing camera on a smartphone.

We know it’s you, and how you’re feeling at this precise moment. “How can I help, Michael?”

Output: human voice.

Even the best synthesized voices still sound, well, electronic. Enter a new technology from Adobe called “vocal editing.” This tech was demoed at Adobe MAX 2016, and uses a recorded voice to allow the “photoshopping” of that voice.

It’s early, but this tech exists, and could be the voice interaction component of this idea. The demo uses a recording just a few seconds long. Imagine what would be possible with dozens of hours of training recordings. The only input required after that is text. Text is the primary output of all of today’s Assisted Intelligence (AI) applications, like Watson for example (IBM Watson, How to build a chatbot in 6 minutes). This is the next logical step:

This technology could easily be used in real time to allow “bots” to make calls to humans, and the humans would be none the wiser. They could even use your voice, if you allow it. Bots can take voice as input (voice to text) and output text which gets sent back in the form of audio (text to voice), using this technology.

Any text input, from any source, with one remarkably consistent voice response. Maybe even your own.

Display: character-persistent, animated avatar.

When I had the original idea, the avatar the user interacted with was a simple CGI character – an obviously rendered character that would remove any interaction distraction. I wanted every touch point with the avatar to be as simple as possible, so you’d spend all of your time focused on the task, and not distracted by its interface. This may still be the best option, but I see that gap closing quickly.

Here’s Faceshift at GDC 2015 (since acquired by Apple), but others (like Faceware, nagapi) exist in the market. Notice two completely different actors playing the same character.

Disney Research Labs has similar technology already in use.

The movie viewer never sees the actor, only the character. With the voice tech above, and a character representation, we’ve removed two persistence problems. Voice and character. Any one (or thing, including AI) can provide consistent human voice, and anyone (sex, build, race, location, whatever) can power the physical representation.

Every single time you interact with the tech, the avatar looks and sounds the same – no matter who the actor is on the other side of the animation.

Display: the human presence.

We’ve seen remarkable leaps forward in what is an age-old (in computer years) tech called motion capture. Meatspace actors wear a variety of sensors and gadgets that allow computers to record their motion for later use. This used to appear in only the best of the best games and movies. Just about everything you see in major releases today (from a CGI standpoint) is based on motion capture, if it involves humans.

Traditional motion capture was just a part of the process though. Scenes would be shot on a green screen, then through (sometimes) years of perfection, a final movie would grace theater screens or appear in an epic game release.

At Siggraph 2016, Epic Games (makers of the Unreal Engine) featured a new process that amounts to “just in time acting.” Instead of capturing the actors, then using that motion later in scenes, Epic used a process that rendered results in real-time. It’s mind blowing – using the camera in the game engine to record a motion captured actor-in game.

Display: enhanced human presence.

The problem with CGI and humans is something called the uncanny valley: “a computer-generated figure or humanoid robot bearing a near-identical resemblance to a human being arouses a sense of unease or revulsion in the person viewing it.”

Explained in Texan: “Well, it might be close, but it ain’t quite right.” It may be getting close enough.

There are several ways humans protect themselves from attack. One of the simplest is recognizing fellow humans. Sometimes they may want to harm us, other times hug us. But either way, we’re really, really good at recognizing deception.

Until now. This piece was created with LightWave, Sculptris, and Krita, and composited with DaVinci Resolve Lite – in 2014 (two years ago).

In 2015, a video was released by USC ICT Graphics Laboratory showing how advanced skin rendering techniques can be deceptively good. Another video by Disney’s Research Hub shows a remarkable technology for rendering eyes. And earlier this year, Nvidia released a new demo of Ira.

Display: enhanced facial reenactment method.

An advancement I didn’t expect to see so soon takes standard digital video footage (a newscast or a YouTube video, for example) and allows an actor, using a standard webcam, to transfer expressions to the target actor. It’s a clever technology.

If the technology works as well as it appears to with simple, low resolution sources, imagine what could be done with professional actors creating 48 hours of source video. That could in turn be targeted by digital actors using a combination of the above video technologies. The interface to this technology would be a recorded human actor with transferred facial expressions from a digital actor, all rendered in real time.

Bringing it all together.

Inputs: voice, text, video, and emotion.

Processing: assisted intelligence, plus APIs to turn any input into text and any text into voice.

Outputs: text, human voice, and/or photo realistic CGI/animated characters/human.

But wait. There’s one more thing.

This is great for an AI powered personal assistant. Marrying all of this tech together into one simple and cohesive interface would make everything else feel amateur.

But what if we could add an actual person (or four) to the mix? Real human beings available 24/7 (in shifts) to note your account, or to call your next appointment to let them know you’ve arrived early? What if your assistant could call the local transit agency, or cancel a flight using voice in whatever the local language happens to be?

All of the technologies mentioned above create an intentional gap between the inputs and outputs, allowing any number of “actors” in between. If a task is suitable for a bot to handle, then a bot should handle it, and reply. If a human is required, the user should never know a human stepped in to take control of the interaction. The voice, text, and display will be 100% the same to protect the experience.
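That routing rule – bot first, invisible human fallback, one consistent persona on the way out – can be sketched in a few lines. All of the classes and names here are illustrative, not a real system:

```python
def handle_request(text: str, bot, human_pool, persona) -> str:
    """Route a request through the 'intentional gap': try the bot first,
    fall back to a human -- the user only ever sees the one persona."""
    if bot.can_handle(text):
        reply = bot.reply(text)
    else:
        reply = human_pool.reply(text)   # a human steps in, invisibly
    return persona.render(reply)         # same voice/avatar either way

class EchoBot:
    def can_handle(self, text): return "pizza" in text.lower()
    def reply(self, text): return "Your usual large cheese pizza is ordered."

class HumanPool:
    def reply(self, text): return "A teammate handled it: " + text

class Persona:
    def render(self, reply): return f"[assistant voice] {reply}"

print(handle_request("Order a pizza", EchoBot(), HumanPool(), Persona()))
# [assistant voice] Your usual large cheese pizza is ordered.
```

The point of the design is that `persona.render` sits on every path out, so whether a bot or a shift worker produced the reply, the voice and face the user experiences never change.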

Think about it: any language in (from either side), and your specific language and video representation out. If there were a maximum of four people that knew you more intimately than your family, but you knew you’d never, ever have to think about this problem again, would you do it?

In summary, I’ve outlined a highly personalized virtual assistant, with 100% uptime and omnipresence across every device and interface you have (including VR, but let’s save that for another time).

What you won’t know is whether you’re talking to a human, or machine.

If you liked this, please share it. Your friends will dig it too. Thank you!

What do you do with a 50Mbps internet connection at home?
https://friendmichael.com/Blog/what-do-you-do-with-a-50mbps-internet-connection-at-home.html
Sat, 16 Jan 2010 23:52:15 +0000

One evening, while on the topic of high-speed internet, a friend of mine told me that he'd just upgraded his cable modem and was now getting even faster internet. Comcast, our local cable internet provider, had recently upgraded the network in Colorado to DOCSIS 3.0. Among other things, DOCSIS 3.0 in Colorado means faster internet speeds – from the usual 4Mbps service to insane 100Mbps+ speeds. If faster internet is available, you can bet I'm going to want it. (Note, I said want, not need.)

I dropped by Best Buy and bought the same Motorola SURFboard SB6120 Rich had. It was $85 before tax, but it frees me from the $5/mo cable modem rental fee. Over time, I expect this modem to pay for itself (and eventually save us a little cash).

Installation couldn't have been easier. Unplug the old modem, plug the new one in, and call Comcast (1-800-COMCAST) to activate it. All I had to do was tell them that I bought my own modem and they did the rest. At one point I joked with the agent (as the firmware was being updated) that “this is where you install that secret back door for the NSA, right?” and guess what she said? She said “Yes” then snickered a few seconds later. I wish I'd recorded that.

After that call (and agreeing to pay more per month of course), my speed tests went to a consistent 35Mbps down. I was able to achieve that consistently as well (I'll share how further down). A few days later, friends on the Boulder Comcast DOCSIS 3.0 network mentioned that their upgrades were achieving 50Mbps speeds. Again, if there's faster, I want it.

So again I called Comcast, and asked if the 50Mbps speeds were available in my area. The short answer was yes, but there was a problem: my modem. The current firmware for the Motorola SURFboard SB6120 has issues going over 35Mbps with Comcast. The support rep, hearing the deflation of my spirits, said, “Will you hold for a second? I need to make a call.” I was in luck – the call he made was to a team that specializes in baking firmware – and the answer he got was “We can build something for the SB6120 that will work until it's released publicly.”

Now I'm at an official 50Mbps. While connected directly to the modem in the basement, I tested out at a whopping 63Mbps. The extra 13Mbps is attributed to what's called “burst mode” on Comcast. It allows you to get incredibly fast (more than you pay for) speed for a short period of time, then as usage progresses you're dropped back to the speed you've paid for.
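The math behind burst mode is simple to model: the first chunk of a transfer moves at the burst rate, and everything after it at the rate you pay for. Comcast doesn't publish the exact burst allowance, so the 10 MB figure in the example below is a made-up illustration, not their real parameter.

```python
def transfer_time(total_mb, burst_mb, burst_mbps, sustained_mbps):
    """Seconds to move total_mb megabytes when the first burst_mb
    megabytes arrive at the burst rate and the rest at the sustained
    (paid-for) rate. Rates are in megabits per second."""
    in_burst = min(total_mb, burst_mb)
    seconds = in_burst * 8 / burst_mbps
    seconds += max(0.0, total_mb - burst_mb) * 8 / sustained_mbps
    return seconds

def effective_mbps(total_mb, burst_mb, burst_mbps, sustained_mbps):
    """Average throughput over the whole transfer."""
    return total_mb * 8 / transfer_time(total_mb, burst_mb, burst_mbps, sustained_mbps)
```

With a hypothetical 10 MB burst allowance at 63Mbps over a 50Mbps plan, a 5 MB speed test reads the full 63Mbps, while a 100 MB download averages out to just over 50Mbps – which is why short speed tests flatter burst mode and long downloads don't.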

A series of tests later, I determined that my wireless router was now a speed trap. I wasn't getting anywhere near the speeds on the wired (or wireless) network that I was able to achieve while plugged directly into the modem. After reading reviews, I decided to upgrade my wireless router to an Apple AirPort Extreme (Gigabit). That did the trick, and now everywhere in the house (wired or not) we get 50Mbps+ internet (all of our “phone jacks” are actually RJ45, and plugged into a gigabit switch).

Here's where this post relates to its title. I was able to achieve greater than 50Mbps in testing by doing things that would never happen in the real world. For example, downloading 15 HD trailers simultaneously from Yahoo! HD Trailers, downloading three 500MB software updates from Apple, and streaming a Netflix movie.

But who does that?
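If you want to reproduce that kind of stress test yourself, here's a rough sketch: fire off a pile of downloads in parallel and divide total bytes by elapsed time. The trailer URLs are placeholders – point them at large files you're actually allowed to pull, and remember that a single server may not be able to saturate your line, which is exactly why I mixed sources.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def aggregate_mbps(total_bytes, elapsed_seconds):
    """Aggregate throughput in megabits per second."""
    return total_bytes * 8 / elapsed_seconds / 1_000_000

def fetch(url):
    """Download one URL completely and return its byte count."""
    with urllib.request.urlopen(url) as resp:
        return len(resp.read())

def stress_test(urls, workers=8):
    """Download every URL in parallel and report aggregate throughput."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        total_bytes = sum(pool.map(fetch, urls))
    return aggregate_mbps(total_bytes, time.monotonic() - start)

if __name__ == "__main__":
    # Placeholder URLs -- substitute real large files before running.
    trailers = ["https://example.com/trailer%d.mov" % i for i in range(15)]
    print("%.1f Mbps" % stress_test(trailers))
```

For scale: moving 6.25 MB in one second works out to exactly 50Mbps, so a 50Mbps line should chew through roughly 375 MB per minute when fully loaded.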

My question to you, dear reader, is what kinds of things are out there that can actually utilize a low latency 50Mbps internet connection at home? Have you encountered, or do you know of real world services (legal only please) that are capable of stressing this connection?

Links:

Motorola SURFboard eXtreme Broadband Cable Modem
Apple AirPort Extreme Wireless-N Wireless Base Station
Yahoo! HD Trailers
Netflix

An iPhone killer I want. Apple, listen up. https://friendmichael.com/Blog/an-iphone-killer-i-want-apple-listen-up.html https://friendmichael.com/Blog/an-iphone-killer-i-want-apple-listen-up.html#respond Mon, 12 Jan 2009 03:26:56 +0000 http://www.friendmichael.com/?p=45 Yes, I'm a die-hard iPhone user. I'll try to sell it to anyone who even pretends to be interested in the device – some might even say I go overboard. I even stood in line (OK, camped out overnight) for both iPhones. I was second in line for the first, and third in line for the 3G at the Boulder, Colorado, Apple Store.

I find a lot of value in it, and if people knew about the wonderful things it can do, everyone would want one. The app store is amazing; I've downloaded nearly 100 applications, many of them paid. The utility of these apps varies, but it is clear that Apple has set the bar for the industry. Not just in the iPhone's user interface, but in the App Store's purchasing process, and the overall iPhone experience. Each firmware release just makes things better and better.

As great as the iPhone is, it's missing a few features. A big one is copy and paste. I don't have a lot of use for it, but it sure stinks when I want it and it isn't there. The SMS alerts are modal and take over the screen. If you've ever been on a phone call and received an SMS you'll know what I mean. You have to acknowledge the SMS before you can end the call. You can only run one application at a time, and there are no background processes. If you get a new Tweet for example, you don't know until you open your Twitter application. If you get new messages in Facebook, you have to go to the Facebook app to find out. I'd love to see what developers could do if their apps were allowed to run processes in the background.

A term was coined shortly after the iPhone was introduced: “the iPhone Killer.” As it has been loosely defined, it is any phone with a “touch” interface, or even more simply, a touch screen. The device has to be internet connected and do lots of neat tricks like the iPhone. Several companies have tried to gain mind share with their offerings: Blackberry Storm, T-Mobile G1, Samsung Instinct, and a host of others. In my experience, these phones all feel rushed to market, as if they were conceived just after Steve Jobs announced the original iPhone. The T-Mobile G1 has gained a lot of attention by being completely open source – any developer can write applications for it – but it has (to date) failed to live up to its expectations.

At CES last week a new phone was introduced. As soon as I heard it was from Palm, Inc. I was quick to brush it off as another “iPhone Killer”: a device that the company has put all of its heart and soul into, only to be let down in sales and market adoption. The iPhone is the 600lb gorilla, after all. Palm, Inc. has lost everything it gained with the Palm OS, and has even been caught selling Windows Mobile instead. A has-been company, looking for one final volley in the modern phone world.

And they knocked it out of the park. World, meet the Palm Pre. This is the first “iPhone Killer” that actually stands a chance at gaining a sustainable piece of the market. Apple might just be caught sweating about this one. Why? Because I (see above for why this is remarkable) want one. And I want it now.

This post isn't about the features and all of the things the Pre can do. For that I'll leave a link at the bottom to the introduction. This post is about me letting the world know that technology is evolving, and Apple is no longer the only player in the wicked smart phone market. Have a look at the video, and you'll be as amazed as I was. Pay particular attention to the charging mechanism and the sync process.

Here's the video: Palm Pre @CES. After you watch that, have a look at the coverage at Engadget. Read the user comments for extra credit: Palm Pre in-depth impressions, video, and huge hands-on gallery

iPhone observations https://friendmichael.com/Blog/iphone-observations.html https://friendmichael.com/Blog/iphone-observations.html#respond Wed, 18 Jul 2007 20:01:45 +0000 http://www.friendmichael.com/?p=90 Since the iPhone's release, it seems that everyone has an opinion on whether or not the features are worth the money, or how it will or won't impact the world. It has this, and doesn't have that, etc. Here are a couple of my observations.

First, I know 25 people with the iPhone – personally. I have never seen such adoption of any consumer electronics device. Say what you want about the feature set, but the numbers don't lie. The interesting part of this is that nearly all of them have (err, had) an iPod too.

If you doubt the draw of the iPhone, go to a local Apple store and watch not only which device draws attention, but notice the sheer volume of traffic in the store. People of all ages are there, from the dreamer in grade school, to the mature senior adult.

Second, I've noticed that iPhone users are less likely to use their computers after business hours. I noticed on Sunday, for example, that after my morning email/news checking, I didn't open my laptop again until Monday morning coffee. If you know me, you know that this is very, very unusual.

It took a while for that to sink in, but once it did, I pinged a few other owners. The story is the same. The experience with the phone is so delightful, that it is actually enabling people to get away from their computers.

I bet Apple didn't anticipate that one.
