A11yNYC July 11 2017 https://youtu.be/wU12OJQzvGc >> Hello, everyone. Welcome to Accessibility NYC. If you're here tonight to join us, I appreciate you coming for Thomas Logan, who's gonna speak about virtual reality and accessibility. Before we get started... Oh, sorry. Blocking the captions. I wanted to say a couple of thank yous. Thank you to Thoughtbot for hosting us tonight. This is their space. They've been graciously donating their space for a while now. Also to SSB BART Group, who's a great supporter of our work. And Mirabai Knight, who's captioning through White Coat Captioning. Also Joly from the Internet Society of New York, back there, doing the live stream. So with that, I'll give it up to Thomas. (applause) THOMAS: Thank you! So virtual and augmented reality accessibility... To quote Donald Trump, Jr., "I love it!" I've got to be topical. So we're gonna be basically -- everything at bit.ly/eevrar. It's basically a collection of all the citations for the illustrations that I'm using in this presentation. I'm quite new to this field. I'm an idea person, I'm a demonstration person, so basically everything I'm showing you in this presentation comes from a lot of great work from people who have been working in this space. And I guess a key point for me in this presentation today -- I'm gonna start a little bit with my own history -- but a key point of my presentation is that, you know, VR and AR are happening now. You know? There's tons of companies working in this space. I don't see any reason for accessibility standards, accessibility guidelines, to be waiting. There really shouldn't be a delay in those experiences being accessible. The argument or the idea that these are new technologies -- I've been working in this field for 15 years. I feel like most of the concepts stay the same. So my own spin on what I'm gonna be showing you today is that looking at what's currently happening in this space and looking at what has been done with technology in the past, I think we should all be advocating for faster requirements and faster standards for people that build technologies in this space. Because honestly, I don't really feel like there's any excuse not to be doing that now. Apply existing accessibility guidelines to VR/AR. I'm mostly going to be tying the demonstrations I do, in the geeky nomenclature, into the Web Content Accessibility Guidelines. I think the work that the W3C does to build guidelines and standards for technology is awesome. I think a lot of what's already been set into the Web Content Accessibility Guidelines -- even though it's called web, it basically applies to virtual and augmented reality technologies. So that's the premise of my presentation. It's to tie the demonstrations that I show you to those existing guidelines. Be accessible, ASAP. I loved just -- I presented this presentation in San Francisco, last week. So I've actually practiced this once, guys! And on the day that I gave that presentation, there was a tweet -- it was the iPhone's 10-year anniversary. And the iPhone, you know, has been an awesome device for accessibility. But Pratik Patel, on that particular day, he said -- happy birthday. It's the 10-year anniversary of the iPhone. But the 8-year anniversary of the iPhone being accessible. That's super powerful. 
I have a lot of respect for Apple and what they've done with iOS and making accessibility more mainstream and building it into the platform, but it's still interesting to look back on that and say -- okay, there's two years of that technology not being accessible and available. And it's the exact same point of view, really, with VR/AR. It really takes, hopefully, Apple, working on their AR kit, other technologies -- it takes these companies building that in, and making it available and accessible. We basically don't want -- a lot of things I'll be showing you are demonstrations, illustrations of how to make this stuff accessible. But it needs to be in a platform. It needs to be available for other developers. People that are just working and trying to build their first app or their second app in this space. They need guidelines and they need techniques for how they actually make that accessible to everyone. So my own history -- I started in 2002. That's me. Working at Chapel Hill. My first project was working with a student who was blind in the Classics department. To access ancient world maps. So this is very low resolution. You remember 2002. This is the kind of quality of graphics you could get, for those that can't see it -- it's a little hard to see the map here. But there's a lot of data points on that map. And for me, conceptually, in 2002, this was my first entry into accessibility, was: How would you take a visualization of an ancient world map and make it accessible to a person that was blind? Very stimulating for me. You know, we use a touch pad, we used haptic sensors, we used 3D sound. All of these technologies are all also very current in VR and AR. But I guess -- why show that this happened in 2002 -- is, you know, I've recently, as recent as this year, worked on map accessibility for the web. And seen that it's still pretty prevalent that maps aren't accessible to people with disabilities on the web. So even though, as an academic, or working in undergraduate computer science, I got the experience of -- okay, I've learned a way to make this particular map accessible, that still didn't become a pattern or a practice that enabled accessibility really broadly. And so that's a big point for my own experience, working in accessibility. Is that I do believe a lot in the standards, in the guidelines, because you can illustrate something, you can demonstrate something. I was really motivated in making this map accessible. But my own career path didn't really keep me working in maps. And 15 years later, I can still see there's quite a lot of barriers with maps and accessibility. My next job was working at Microsoft. First job out of school. Windows Vista timeframe. Exciting time to be at Microsoft. That's a little sarcasm in the tone. What was interesting there was that I was a student, coming from school. Coming to Microsoft, really working on, like, the dominant operating system for accessibility. And I guess I just point out that I felt like I did have a lot of responsibility in this space, but there was not, like, this huge resource of information on the web. Like, if I wanted to actually advance myself as someone pretty new to the area, it really came from, like, section 508, which was the US standard -- it was 16 lines of text. You know? There wasn't, like, this rich set of information of how are you supposed to do this? What are the guidelines? 
And so, you know, I did my best, but I kind of look at where we are now, and I see that it's really awesome that we have had enough time to build a lot of these patterns, document so many things, and obviously, if I was in this role, and had access to the information that I have now, I would have made different decisions, you know, on that platform, at that time. I also worked from 2007 to 2012, kind of worked more, tried to go Open Source, worked for Mozilla. I worked with the W3C, I worked on ARIA, the Accessible Rich Internet Applications specification -- like, 2007 to 2012, I made a music player where you could search YouTube. No advertisements. And queue up a bunch of songs, and make a playlist. And I used that to illustrate how to build an accessible web application. Which, again, going to my point that I had worked at -- on the desktop with Windows Vista, and seen pretty much how to make something accessible -- still in this timeframe, it was very difficult to make something that used to work accessibly on the desktop accessible on the web. This was not a very complicated web application. But to build something with, like, sliders and with dynamic lists -- the patterns were being developed at the W3C, but they weren't really supported in the technologies. Now, technology like this would work really great. In 2017, the ARIA specification has matured. It's actually gone through this whole process. You can use Google Chrome, Apple Safari, Internet Explorer, Firefox. They all kind of work consistently. When I built this back in 2007 to 2012, most of the features would work in Firefox. So it would be like... Great. You can show someone an accessible experience in Firefox. But not in any of the browsers that most users with disabilities were probably actually using to access the web. Then I've also done a lot of work in training. I just point out here that in the more recent years, mobile development, iOS, Android -- you know, they have done a lot more to build accessibility into the platforms. But still, I would say that most of the requirements, most of the work I've seen done in training people on how to make mobile applications accessible, was similar to what had to be done on the web and on the desktop. The principles were the same. The APIs were slightly different. Conceptually, other than touch input, the APIs were the same as what I worked on in 2002 and on Windows Vista. Which leads me up to the most recent project that my company worked on. We worked on the Winn-Dixie case, which I'm very proud of -- it was about getting a website accessible. The website in that case, inside of the litigation, was not accessible, and we had to show that. One of the only ways to prove that in a court of law or to explain that to someone is to say you're not meeting this technical standard. So I'm gonna again emphasize the Web Content Accessibility Guidelines and the work from the W3C. The fact that that specification is so spelled out and so kind of black and white is something I think is very important for getting technology accessible. You know, without being able to reference a technical standard as, like, an expert witness in a case, to say that this needs to be accessible -- it's much harder to argue that with lawyers that aren't gonna be technology experts. But when you can point to a standard, you actually have something to base that off of. 
And so that's -- all of this is leading up to why I'm gonna -- when I talk through VR and AR, which is most of the demos, I think we should think of it in that lens, and we should be advocating sooner rather than later for VR and AR experiences to be covered by technical standards, to push this along. So we're gonna start with 1.1.1, text alternatives. Which is sort of the first accessibility requirement, when you think about the world wide web -- the first graphical internet browser. We had images. As soon as images were put into a graphical browser, we needed to have a text alternative for an image. We needed to have an alt text. So if you look at what's been done in the VR space, this is actually from 2010. Research from Eelke Folmer at the University of Nevada. He did research in Second Life. And I think Second Life is really interesting to look at, as a precursor to everything that's happening. Especially in virtual reality experiences. To say that Second Life, while not something that really went mainstream, did have quite a lot of users, and it has had a lot of accessibility work inside of it. So he and his team at the University of Nevada, they did a study to look at objects inside of the virtual world, Second Life. And they found that 31% of the 350,000 objects in the 433 regions that they studied in Second Life had basically an alt text of "object". Which I think should sound pretty familiar for us that have worked on the web. Mobile. Desktop. Basically like a worthless text alternative for the objects that exist inside of virtual reality. And I would argue the real number is probably much higher than 31%. Like, the other, you know, almost 70%, probably, still don't have really great text alternatives. They just weren't the word "object", which is sort of the default piece. So now I'm gonna show you... Oh. I'm gonna show you a video. This is actually very current. I actually downloaded Second Life. I went into virtual -- well, it's virtual reality. And I want to illustrate kind of the idea of inside of Second Life, how text alternatives work. And just an experience that could occur in Second Life. So inside of this environment... I'm basically... My avatar is wearing an owl on its shoulder. It's great that I can do that. But I'm looking at a plate of food, a table that contains multiple plates of food. There's asparagus, cucumbers, salads, plates, forks, knives, something cooking on the grill. Some bread. These are all things I can see inside of the virtual environment. But inside of Second Life, you can basically right-click and query objects in that environment for text alternatives. And what's interesting here is that you basically have the same principle that, again, we already see on the web or on mobile applications, where all of these objects are grouped into, like, a single object. They're basically just an entire table of elements. So there wouldn't have been a way in this current environment to actually label individual items -- asparagus, bread -- and again, with text alternatives -- we talked about this in Shawn's presentation -- context is important. Maybe you only need to know that that table has a bunch of food on it, but again, in the context -- we don't really know how these things are gonna be used in virtual reality. Maybe it does matter that there's asparagus, there's bread. We actually need more specifics. So it's interesting that in the mechanics of the game, depending on how you group objects together, you have facilities to give text alternatives to those objects. 
So I right-click that particular object inside of Second Life -- again, not much different from what you'd see in the editor for Xcode or Android Studio. There's a name and a description for that particular object, and the name of that object was sm-deluxe-dinner-buffet-v2-(mesh). Which again sounds familiar. I recognize these kinds of horrible text alternatives. >> Question. When... As far as workflow goes, for a developer, when do you put those alt texts in? At the end? THOMAS: So inside of Second Life, at a higher object level, they do have these attributes, name and description, which you would be able to set as the creator of the object, when putting it into Second Life. So there have been other projects, where they were set after the fact. But, you know, my position would be the author of -- whoever created this dinner buffet should be the person responsible for setting the name. And, you know, if someone else comes and views that buffet, that name should be set. But basically it could happen at multiple times. There had been a research project to set them after the fact, and basically, because most people weren't labeling them, the same team from the University of Nevada built an app where you could go in and add labels to objects after the fact. So it could happen at different times. Again, just showing this illustration visually... Those objects -- whoever designed this -- I mean, they could have been broken up into separate objects. They didn't have to be grouped together. Maybe the asparagus should always be on the left of the cucumber. To follow certain dining rules. Some culture somewhere. But again, they grouped everything together. So you wouldn't really have that mechanism to individually label elements. But I think the point is that, especially as it gets more sophisticated -- I mean, this is still from like -- probably an older build of virtual reality. I mean, you have objects. When you are designing things in virtual worlds, you are working at a smaller object level. And you could give individual alternatives and give higher grouped alternatives. So here I'm showing -- I decided to go to the beach in Second Life. I need to relax. And here we actually have a good text alternative. So there's a bar -- I think this is supposed to be Honolulu. It actually had a correct name property for that element called Bahia Tiki - Honolulu Tiki Bar. So it had a good text alternative supplied for it, at that particular level of that particular world. And what's cool about that -- and again, I illustrate to show that all of these things already existed in Second Life. So we could have this currently in VR experiences now -- they had actually built these virtual assistant guide dogs inside of Second Life, which are AI bots, basically, that could lead digital avatars to named objects. So you could tell the guide dog -- I wanna go to Bahia Tiki Bar. If it's labeled correctly in the virtual world, the guide dog is actually leading your avatar to that position. I think that's really cool, that they have already built AI objects in that space, that had that type of logic, and had that interface. Also, calling out visually on the screen, there's also a woman using a white cane to access the environment. So again, it's not saying you have to use a guide dog in virtual reality. Just like you don't have to use a guide dog in the real world. 
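To make that idea concrete, here is a minimal TypeScript sketch of what object-level text alternatives and a guide-dog-style lookup could look like in a generic scene graph. This is only an illustration -- the types and function names are made up, not Second Life's scripting API or anything Unity exposes today.

```typescript
// Hypothetical scene-object metadata, not an existing engine API.
interface AccessibleObject {
  id: string;
  name: string;          // short text alternative, e.g. "plate of asparagus"
  description?: string;  // longer description when the context needs it
  position: { x: number; y: number; z: number };
  children?: AccessibleObject[];
}

// Flatten a grouped object (like the dinner buffet) so individually
// labeled children can be queried, not just the group as a whole.
function allObjects(root: AccessibleObject): AccessibleObject[] {
  return [root, ...(root.children ?? []).flatMap(allObjects)];
}

// The "virtual guide dog" query: find a destination by its accessible name.
function findByName(root: AccessibleObject, query: string): AccessibleObject | undefined {
  const q = query.toLowerCase();
  return allObjects(root).find(o => o.name.toLowerCase().includes(q));
}

// Example: a buffet whose children carry real names instead of "object".
const buffet: AccessibleObject = {
  id: "sm-deluxe-dinner-buffet-v2",
  name: "dinner buffet table",
  position: { x: 0, y: 0, z: 0 },
  children: [
    { id: "a1", name: "plate of asparagus", position: { x: -1, y: 1, z: 0 } },
    { id: "b1", name: "basket of bread", position: { x: 1, y: 1, z: 0 } },
  ],
};

const target = findByName(buffet, "asparagus");
// A guide-dog bot could now path the avatar toward target?.position.
```

The point is just that if names like these are authored when an object is created, a bot or any assistive interface can query them later.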
And I think that's pretty cool, that they've built these objects and they've built these experiences into that. I'll touch on this later, but I do think this is something obviously missing from experiences where you're trying to represent yourself in the digital world. It's pretty rare to see people with disabilities represented as, like, avatar options. And so, again, I would have that... It's actually not a WCAG guideline -- a Web Content Accessibility Guidelines guideline. It should be. But I think it's cool that that happened there, and again, it's cool that those objects -- the guide dogs actually had, you know, AI implementations that could use the accessibility alternatives. And then this is sort of my point here, is to look at our current world. Basically there's a lot of people designing objects for virtual experiences. And you basically sell these objects -- you typically sell these objects through, like, an asset store. So Second Life was like one big world, where it kind of was its own economy. But in the sort of current space of, like, Unity and these other platforms, it's a development platform where you grab these objects and you bring them into your game. And so my point is that, you know, if I search for, say, asparagus inside of this asset store, because, you know, I'm just obsessed with asparagus, I can search for that in the asset store, right? It has a label. In this case, it's actually a file name. But in several of these environments, they already have titles and descriptions. So again, this feels very reminiscent of people saying... You should have good page titles. You should have good heading tags in your document, so that Google can index you. Obviously if you're trying to sell your virtual objects, they need to have good descriptions. They need to be very descriptive. So it's kind of a little much, to quote myself here. But I'm just gonna do it. Capitalism and text alternatives... I think, hey, if you're trying to sell stuff, which we are, in America, all those things, all of those descriptions, all of those titles that you're typing into these stores... They should just carry through into the virtual environment. If you're already supplying that information for somebody to search, to find it, to buy it -- it just should carry through. I felt the same way about Facebook stickers, emojis, all of those other concepts, where we had those digital assets. Sometimes you see those carry through. Sometimes you don't. But I feel like that's something that could very easily happen now, that people are already supplying these descriptions. But, for example, in Unity, there's not a property to look for, like, the name of an object. The description of an object. There's not an ecosystem yet. Right? Where someone would build, like, a virtual guide dog inside of Unity, and let someone query what elements are in there. But it could happen soon. And it's 1.1.1, text alternatives. All right. So now, this is augmented reality. Sort of the same concept, though. I saw a previous version of this presentation at CSUN, the disability conference in California. Dr. Aura Ganz from the University of Massachusetts, Amherst, basically built virtual renderings of real world spaces into virtual reality, and labeled all of those objects. 
So you could either explore, walking around in a virtual environment, at your home, maybe with an orientation or mobility specialist, or you could actually go into the real world place and have the reality augmented with tags and other objects to actually know what room you're in. So let me just play the video really quickly. Just a short part of the video, to show the student navigating at the University of Massachusetts, Amherst. He's actually gonna be looking for the Disability Studies Office in a certain building. Ooh, and I didn't test... The sound is not playing. I'm on the HDMI. Yeah, we'll use our captions. (laughter) All he has to do is scan his percept tag, and he's told he's on the second floor of the environment. And now he has a list of destinations in the building on his Android. So he can type, he can basically go through that list, he has an edit box where he can type in numbers. And so he's typing in the room number 20... Sorry. 230. Or 233. And there's a filtered list that's occurring. So he's basically filtering all the rooms inside of that environment. And then he's receiving a set of directions that he can then follow. It'll say walk forward 20 steps. There's gonna be an elevator. Look for the button. There's an elevator instead of stairs to access each level. Your current location is the... So at each of these main waypoints, there's a Bluetooth -- I'm not sure if it's Bluetooth -- but it's a sensor that interacts with the phone, that makes a chime sound, to basically tell you that you're at the next landmark. And that's actually what updates the directions in the phone, to say that you've gotten to the next sequence. So I think that's actually... Right here, he got to the elevator, and his phone is telling him... Turn right. You will reach an opening. This is an intersecting hallway. Select next instructions button. >> (inaudible) THOMAS: Yeah, that's what I liked about this project. It was built natively into using Talkback on Android. It's the native Android Talkback voice. It's just an app running on it, and they load different environments. The one I saw at CSUN was the Metro. So you can tell if you're on the blue line, the yellow line, in Boston, downstairs, upstairs. And it was cool. They let people practice it before they went to the subway in virtual reality. So you could be like -- oh, the ATM or the machine to buy the ticket is to the right here. The place to walk through... Is just really awesome. And I think that's an awesome combination of, like, VR and AR. Where VR, you practice it at home. You know? Kind of safely get an understanding of the sounds of that environment. And then AR, when you're actually in that environment, you can get, you know, awareness of where you are in, like, internal structures. And I guess that's the big thing with the work they're doing, and lots of other groups are doing. Is to have a way to do internal navigation. GPS doesn't typically work inside buildings. So this is a mechanism that shows a way to get really specific instructions inside of a building. And again, authoring all of these instructions, that's similar to authoring the virtual environment. And it's again why I think it's interesting. Someone has to go in, into the rendering of the environment in the app. And label all these things. These prompts are like a coded mechanism. So it is just basically taking different text alternatives like room 2113, or room 200, and it's inserting prompts around that. 
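As a rough illustration of that idea -- prompts wrapped around text alternatives -- here is a small TypeScript sketch. It is not the PERCEPT system's actual code, just the general shape of generating directions from labeled waypoints.

```typescript
// Illustrative only: turn labeled waypoints into spoken-style directions,
// the way a room label like "Room 233" gets prompts inserted around it.
interface Waypoint {
  label: string;              // text alternative for the landmark, e.g. "Room 233"
  stepsFromPrevious: number;  // rough distance to announce
  turn?: "left" | "right";    // turn to make before heading to this landmark
}

function directionsTo(route: Waypoint[]): string[] {
  return route.map(w => {
    const turn = w.turn ? `Turn ${w.turn}. ` : "";
    return `${turn}Walk forward ${w.stepsFromPrevious} steps. ` +
           `You will reach ${w.label}. Select the next instructions button.`;
  });
}

// Example route: entrance -> elevator -> Disability Studies Office.
const prompts = directionsTo([
  { label: "the elevator", stepsFromPrevious: 20 },
  { label: "an intersecting hallway", stepsFromPrevious: 10, turn: "right" },
  { label: "Room 233, the Disability Studies Office", stepsFromPrevious: 15, turn: "left" },
]);
prompts.forEach(p => console.log(p)); // a beacon chime would trigger the next prompt
```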
So there is an object structure that needs text alternatives to label... Basically the architecture of that building. So that someone can use this app. But again, what I thought was cool about this was that they didn't just build it like a one-off demo. They had a whole infrastructure. How would you apply this to any building in the built environment? And this is where I look at the work I did at Chapel Hill, for example. So when you see these types of projects coming from academia, the only way we would see this in a mainstream -- actually in every building -- is if it became a standard. Sort of like applying it to the Americans with Disabilities Act, or to some other act, where -- if you build a building, you need to label all the rooms. You need to have these things working. So we see this coming from research. But I think it's really great. I think you should be able to navigate anywhere indoors with your app. Another one I wanted to highlight was the Apple Watch. I had tweeted this. I don't tweet that much, but this one was worthy of a tweet. Molly Watt is a woman who's deaf-blind. And she uses the Apple Watch, integrated with Apple Maps, to navigate. So she says 12 taps means turn right at the junction, and three pairs of 2 taps means turn left. So basically without having to listen to sound or audio directions, there is sort of a rich vocabulary in the watch for ways to take directions. I literally took her quote exactly. I don't know what the hierarchy is of all the vocabulary of directions. But 12 taps, turning right. Three pairs of 2 taps means turning left. I just think that's really cool, to have a way of saying -- again, we don't have to only rely on audio as the other mechanism for navigation. So the Apple Watch is already demonstrating that there is at least the start of a vocabulary for taking vibration and using that information to navigate in the world. All right. So moving on. 1.2.2, captions. So this is a display of AltspaceVR. I showed AltspaceVR, because they're -- at least from my understanding of VR -- they're like Second Life, but for the Unity platform. They're a very popular social place to come and meet up with people in virtual environments. So they do have the ability to display chat information. And I did have to censor this screengrab, because, you know, people aren't always nice in chat. But the idea is that this does work with CART. If you have a chat mechanism inside of a VR space, you can have an awesome stenographer like Mirabai Knight providing realtime translation. This text that we're typing right here could be displayed inside of this chatroom, inside of the VR world. And so you could participate with a presenter inside of a VR world, if you have a way to display text captions. And again, what's interesting is, because there's so many people building experiences really quickly, I think it's good that AltspaceVR has this alternate mechanism. Some mechanisms say -- no, you've got to use your voice. I've gone into a few worlds where they don't have an alternative, and so in those spaces, a person that's deaf or hard of hearing would be totally cut off, because they're basically saying you have to listen to speech. This is just to show another example. This one's gonna be kind of chaotic. And maybe... As I was practicing this with Mirabai today, we were saying... Well, people should just all learn how to type at a stenography speed. But on the screen, we have... 
The primary speaker from AltspaceVR is kind of this black avatar with blue... Call it a neckerchief or a mohawk. And he's talking to people -- blue headphones. I need help always with these descriptions. He's talking to five or six people, five or six avatars, that are just basically white avatars with no arms and, like, blue and green ties. Oh, wait. The audio is not actually playing. I'm gonna play it... Actually, hold on. I'll play it out of my computer speakers. >> A little learning curve, but it's awesome. >> It's running. It's rather smooth with all these people here. >> Yeah, fantastic. THOMAS: So inside of that space, my point is that you have avatars representing people, and usually in the space you do have labels to say it's so and so speaking. So the real ideal is that if you could have the text alternatives -- for the captions -- be associated with the avatar, so in this example I'm showing... Another rendering from Second Life, and there's an avatar in a wheelchair, and there's, like, a thought bubble above him, saying "Can you give me directions to the park?" So inside of this, if you were another avatar looking at this going on, by having captions associated with the specific speaker, you know who's speaking it. And I think, again, the complexity, just like if we are all sitting inside of this environment and Mirabai needs to caption who else is speaking in the room, you just have the exact same elements in the virtual world. But I guess the exciting part in the virtual world is you could potentially, you know, tag those to potential speakers and have it -- have directionality, just like you have captions on television and movies sometimes appear under who is speaking. That's something that I think would be cool to start seeing in VR worlds. And I guess some games you would already have that. Right? It wouldn't be realtime. But a lot of programmed games do have captions directed to who's speaking. All right. Sensory characteristics. Let's see if this demo works. So I'm playing sound from the back of the room. This is my gimmick there. What we know is to give... When we have a sound description, putting, like, (knock- knock) or description of the sound into brackets. Is there... One thing that I don't know... And I guess I'm curious what Sveta's input or other people's input... I don't know, really, directionality. How, inside of captions, you explain if something's happening behind you, or in front of you. But this is something that I do think is very unique in VR space. AR space. Is that it is in a three-dimensional place. So you do have a lot of current experiences even using things like a knock at the door behind you. And that's supposed to be your prompt to, like, turn around with your head display, and look behind you. And so in that case, the mechanism to actually display a notification of where the sound is coming from, vibration, text, flashes -- I think that's something that needs to be put into the guidelines. It needs to be part of the captioning. Is, like, because that's gonna be more and more frequent, I think, in the VR experience. Now I'm gonna show you a Bjork demo, because I like to just keep rolling through demos. I've got James Hearnden from Equal Entry, that I work with, to write this description. Because I was like... It's just Bjork. I can't describe it. But here's a great description. Bjork's eyes are closed as she sings into a stage microphone. 
She wears a hot pink carnival mask with sequins and a veil, which gives her the appearance of a radiant jellyfish. Which I think is a good description. So Bjork is always innovating in the space. Something that I was always stimulated by. And the performance of her most recent album was that she had a visualization of the sound going the entire time of the performance. And this is something that I think is -- could be really interesting. For people that are deaf and hard of hearing. To enjoy musical experiences more. You basically have a way to see when the bass drum, for example, hits in the song. There's gonna be like a circle that displays at the bottom of the screen. Every time the kick drum or the lowest tone drum plays. And there's going to be visualizations of the directions of the notes that the strings are playing. And Bjork's voice itself is also very expressive. And for me, just watching the concert, it was really interesting to watch the pitch of Bjork's voice basically move up and down visually, in the performance. But I just showed this as something that I think is interesting in augmented reality. Is that this concert -- the entire time, they basically used the same visualizations. (strings playing long notes) Basically, the pitch, when it goes down -- we're seeing these circles representing notes go down. When the voice starts, which is gonna be the next line, the pitch is moved up, and it's visualized differently. As we move through this performance... I've got to find where the drum is. You can see in this screenshot, there's basically circles that represent every time the kick drum hits. You know, and throughout the show, they had different visualizations of all of the notes that were being performed. And I think this is something that now, with technology, it's pretty easy for people to run these visualizations on sound. And I feel like you can see a lot of things in current games. You know, on mobile devices, even. Teaching people how to sing. There was a recent performance on America's Got Talent of a person who is deaf, who sang -- you know, really beautifully and perfectly. As a performance. And one of the things she mentioned in the clip package was that she used visual tuners to tune her voice. You know, to actually understand the pitch and know that she was singing specific notes. And I kind of immediately related to that. That there's definitely a connection of the visualization of sound and what's possible kind of now, to make an augmented reality have more information than just, say, captions. Especially for music. All right. So moving on to use of color. This is another augmented reality. I showed this to say that this is almost the exact same plugin that you can run on your web browser. So, like, color doctor. You can look at something through this. And you put on these glasses. It's called SimViz. This is also from University of Nevada. And while you're wearing the glasses, through the eyehole, you're looking around the real world, and you can turn on the color filters to simulate the different colorblindness settings. So I always think back to when we had the presentation about colorblindness here, and the way that sushi looked, for example, through a colorblindness filter. Sushi kind of looks really gross. With certain types of colorblindness. Looks grey or black. So you can kind of get that experience with this augmented reality, which you couldn't when you're running this on just a web display. 
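One very simple version of that kind of check, sketched in TypeScript: collapse colors to their luminance, roughly the way an achromatopsia-style filter does, and see whether the information still reads. The color values and names here are made up for illustration; tools like SimViz or Color Doctor do considerably more.

```typescript
// Minimal sketch: reduce a color to its perceived luminance (ITU-R BT.601 weights)
// to check whether meaning survives once hue is taken away.
type RGB = { r: number; g: number; b: number }; // 0-255 channels

function toGrayscale(c: RGB): RGB {
  const y = Math.round(0.299 * c.r + 0.587 * c.g + 0.114 * c.b);
  return { r: y, g: y, b: y };
}

// Two status indicators distinguished only by hue...
const errorRed: RGB = { r: 230, g: 60, b: 60 };
const okGreen: RGB = { r: 60, g: 160, b: 60 };

// ...come out as two nearly identical grays, which is the 1.4.1 use-of-color problem.
console.log(toGrayscale(errorRed)); // { r: 111, g: 111, b: 111 }
console.log(toGrayscale(okGreen));  // { r: 119, g: 119, b: 119 }
```

A filter like that only covers the digital content on the screen, of course.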
But if you're actually looking around the real world, you can actually see what the real world looks like to someone that's colorblind. And I think, you know, just interesting to say that there's no difference in the requirements around communicating information through color. But with augmented reality, you can start testing the real world, in addition to the digital world. Resizing text. So I took this quote from Ian Hamilton, who did an awesome article, which is linked to in my notes. About common challenges in current AR/VR games. And one of the big ones is the text -- the size of text, as it's displayed in these environments. So Jesse Anderson said the Oculus Home area looks pretty cool, but it's pretty much unusable for me. The main tiles area and smaller boxes on the left and right are purposely set at a virtual distance in front of you. There's no way to look closer or zoom in. So resize text, zoom to 200% -- 1.4.4. These are exactly the things that, in these environments, you shouldn't be fixing -- that you've got to look at this text from this perspective and at this size. But that is actually kind of common in a lot of the current games or experiences that are put into these environments. So I'm showing -- this is really awesome. I recommend it if anyone wants to download Second Life -- I had a lot of fun in this... This is called the Virtual Ability orientation path. So there's a company, Virtual Ability, that's done a ton of work inside of Second Life, and thought through these accessibility principles. So they actually have a world that you warp to, where you get all of these instructions on how to use the world, and how to modify it for accessibility. And they have instructions on how you enlarge the text, you know, exactly what we were looking at there. And built into Second Life, there is this idea that you can walk up to any piece of text -- so on the screen here, there's, like, a billboard of just, like, black and white text. A lot of text. It's hard to read in this screenshot, but you can walk closer to it. You can basically zoom into it, inside of that experience. Like, every object in Second Life does support that type of zooming. So I would say, like, if a company's looking for a pattern of how to do this, it does totally already exist in that Second Life type of environment. The next one -- 2.1.1. Keyboard accessible. So I guess when you think about the virtual world, like, it's still interesting to look at it just from keyboard access. It's like... People shouldn't have to use motion controller wands or a treadmill that they run on. There's a lot of crazy demos we've seen of virtual reality controllers. I guess just the big principle is: People shouldn't design experiences that require you to use a specific type of interface. It could be that a person has no arms, so they can't use the wand controllers. It could be a person that has no legs. They can't use, like, a treadmill-type interface. So the whole principle of keyboard accessible is really about: Don't make your experience require a user to have a certain type of input. Again, showing the screen here, there are pretty varied differences currently in the devices from different companies. The HTC Vive has two wands that you're supposed to hold in your hands to interact with the experiences. The Leap Motion -- you don't hold anything in your hands, but you do basically wave your hands in front of an infrared sensor to control things. With Google Cardboard, you have to be able to hold the cardboard up. 
I guess you could get a strap to put that onto your head. But you have just different types of technologies, and they may or may not have requirements for input mechanisms. So people that design and develop games need to be having this thought process of not assuming that everyone's using the same input mechanisms. I think currently that's kind of a good thing, that we have this huge, diverse ecosystem of devices, because it sort of forces people -- again, from the capitalist perspective -- not to design for, like, one input mechanism. They are already having to make sure they can work with, like, as many of these devices as possible. But if one became dominant, I think you could easily see that people would start saying -- well, of course you've got to use hand wands to interact with this environment. And that's where you would start having these violations, that someone has to use this type of interface. >> Google Glass. THOMAS: I'm not actually sure if there's a left-sided... >> Every one I've seen has been the same. If you don't see out of your left side... THOMAS: Yeah. Again, I think that's the whole... That's the whole thing. Is that there's a lot more scenarios, I guess, to consider in the environment when you have sound always in there, and you have these movement mechanisms. Some of these experiences are really just -- it's something you're wearing on your head. HTC Vive, for example, you are walking around in a room. And so you do have, you know, across the different devices, different challenges that pop up. This was another quote from Ian Hamilton. So he said... Even if your vision is one of room-scale VR with 360-degree head movement and full hand tracking, that person who is playing without any horizontal head movement or locomotion at all, just using one stick and two buttons on a controller, may still be the person who gains more from the experience than anyone else. And I think that is really interesting. If you look at some of the biggest groups of adopters in Second Life, or at least from what I could see, from going around in Second Life, there were actually a lot of people with motor impairments, amputee communities, inside of there. And so it almost seems to say like... For people that were trying to build these experiences, maybe your first customers and the people who use your experience over and over, they might be using that type of input and they might be the biggest fans of your experience. Similar to any other technology we work in. But I thought that quote was great. And we basically just don't want to preclude anyone from enjoying these experiences. All right. So this is basically the last big demo I had. This is a game that I was playing in HTC Vive. It's called Audio Shield. And in Audio Shield, you hold the two wands -- each wand gives you a shield in the virtual reality experience. And then these balls are timed to the music as they play. You try to block the balls with either a red or a blue shield. So in this video, it's going to be... Some music playing. (creepy laughter) (peppy pop music) THOMAS: Just gonna not play a long bit, but just to say that visually it's kind of very overwhelming. There's a lot. It's quite bright, and, you know, in my own experience playing this, I really liked it, because I like to dance, and I actually felt like -- oh, maybe I'm getting a workout in this environment. Maybe I could get a great workout in this environment. But there's a lot of background information. It's actually... 
It was very hard for me to actually stay in the experience more than, like, one song. And so in the... I guess just to illustrate... I did what I would recommend anyone else do if you want to change an experience: I actually searched out the developer. I found the developer. I told him -- I'm a huge fan of the Audio Shield game for HTC Vive. I think it's great. It's part of a home workout. Enjoying using my music with it. I wanted to request when the product gets updated, to have a dark theme or some kind of less bright color option. I enjoy the game but would like to have a less bright background or a less bright color option for the balls. That would allow me to play longer. And he was very receptive. I thought it was very cool that he wrote me back within less than a week. Dylan Fitterer is the developer of that. And he says -- here's a modified stage dive skin. He says you need to change a couple lines of code. Luckily for me, I can do that. It's available for me to try out. He called it Use Dark Minimal Mode. I had to set that to True. It's very simple logic, but I guess the point that I like about showing how simple this was -- he had to design that, and he did do that for me. It's really cool. But it would not be much harder for that to be a platform-level setting, just like on your iPhone. Developers on the iPhone can query -- does someone have high contrast settings requested on their phone? So if there had been a way for me just to request that, he could automatically set that mode or set that experience for me, based off of the user preference. And then other developers could also understand that. So I love that he did it, and I love that it actually, like, worked really well for me on this screen. I'm just showing a side by side, to show that in the dark minimal theme, just switching to the dark background, it really did make a difference for me personally, like, using the experience -- I actually was able to play it longer. Hopefully burn more calories. Yeah, or time. And I think it's just the point, though, to say that I don't see any difference between what he did there and what you would already be doing in the iPhone experience or a web experience. But the problem is that if there's not that platform mechanism, it would be individual requests. And it would not be, you know, a really easy or discoverable type of thing for users of these platforms. And then wheelchair and reach requirements. I want to get out of WCAG and say that something I thought was super awesome in virtual environments was this project -- I'll have it linked in my documentation; I can't remember which university. But they basically measured individual users' reach abilities. And so, you know, again, just common accessibility mantra. People aren't all the same. One person's reach distance is not gonna be the same as someone else's. The ADA, you know, has sort of a range. But they have a minimum range of a reach distance. But this was showing that in a VR environment, you could measure it specific to an individual. And then show them -- basically an apartment they might be choosing to move into. Tell them if they would have trouble reaching the counters in that apartment, or getting through a doorway. So it's like a personalized way to look at potential apartments that someone was moving into, to say you would have an access barrier here. And again, I think that's a neat merging, similar to the orientation and mobility for a person who is blind. 
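Here is a rough TypeScript sketch of the kind of check that reach project describes: compare one user's measured reach, captured in VR, against the heights of fixtures in a to-scale apartment model. The names and numbers are only illustrative, not the ADA's actual reach-range rules or the project's real code.

```typescript
// Illustrative sketch: flag fixtures in a to-scale virtual apartment that sit
// above an individual user's measured reach, instead of assuming one standard range.
interface Fixture {
  name: string;
  heightMeters: number; // height of the control or surface above the floor
}

function reachBarriers(userMaxReachMeters: number, fixtures: Fixture[]): Fixture[] {
  return fixtures.filter(f => f.heightMeters > userMaxReachMeters);
}

// Reach measured for this user during the VR session, e.g. from a seated position.
const measuredReach = 1.2; // meters

const apartment: Fixture[] = [
  { name: "kitchen counter", heightMeters: 0.91 },
  { name: "upper cabinet shelf", heightMeters: 1.5 },
  { name: "thermostat", heightMeters: 1.35 },
];

for (const barrier of reachBarriers(measuredReach, apartment)) {
  console.log(`Potential access barrier: ${barrier.name} at ${barrier.heightMeters} m`);
}
```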
This is a similar idea for someone in a wheelchair, or with limited mobility -- to be able to explore places before they go there, and find out if it's not gonna be accessible to them. So, again, cool that it's research. But it would be awesome to see that as, like, a mainstream item. Also in Second Life, they've actually made seating that's welcoming for people whose avatar is in a wheelchair. And I thought that's just, again, super awesome. Like, these environments are often built exactly to scale. Like, architects currently use VR/AR to show people -- hey, this is the house that you're buying. This is what it would look like. So they are measuring everything to exact dimensions. So it's just crazy that in VR you would have the exact same requirements. You actually don't want to build seating at, like, a virtual event where someone would be going to see a speaker present, where someone in a wheelchair can't get up to a certain location to sit and view it. And I guess those types of things have happened in Second Life. And it's something that that company, Virtual Ability, learned from working in that environment for so long. That these are sort of design constraints you should think about in the virtual world. Here's a wheelchair ramp going down to the beach. I like that the beach I chose to go to in Second Life did have a wheelchair ramp. It's the only one I would go to. And lastly, I guess just continuing with what I touched on in the very beginning. Virtual identities, being yourself, not being yourself. I guess this is something I think is super stimulating in the virtual environment. Is this idea that people that have -- you know, say, a physical disability, such as being an amputee, having no legs and using a wheelchair, or having crutches, having any type of assistive device -- it seems like all of these have already been gone through inside of Second Life, and there's a whole array of options for these products, basically. I think some of these might be for sale, some of these might be things you just get. But the avatars can use these objects and can present themselves that way in the virtual environment. And I think that's something that, again, is missing from almost any digital representation of avatars in most experiences. So it's neat to look at that in Second Life, and potentially learn from that. I can't even remember -- I was trying to be really current here. But this was to say, like, there's this criticism or this... You know, concern when people who don't actually have that disability portray themselves as, like, someone from that minority group. And this is a movie, I think, called Blind, where Alec Baldwin is playing a blind character. I guess we would have this exact same problem in virtual reality. I guess if we make these available, people can take on these identities, and at least from what I've seen, I guess, you know, you kind of expect that on the internet -- but I have seen it, I guess, go the other way, inside of the virtual world, of people being inappropriate with those avatars. I show that as, like, well, technology and sort of the open world of technology -- I think we would expect to see other mechanisms of that. In this environment, there's kind of an android version of an amputee, an elf version of an amputee, and different kinds of... Maybe sexualized portrayals of amputees, which I think has also been popular in Second Life. 
So I guess it kind of comes with all of these things, that when you create the ability to have a virtual world, I guess you have to consider all of these discussions. >> So I just have a question in terms of that. Because, you know, when I spoke at NYU, in front of a bunch of doctors, they're assigned as part of their training to spend four hours in a wheelchair. And I also... I participated in a webinar with another group -- for those of you who don't know me, I run workshops on self-esteem. So there was self-esteem programming they were doing, heavily based in research, and one of the things that they were doing with participants is allowing them to experience Second Life. And the woman who was running it loved it, because she could be an able-bodied person within this world. And I was a little horrified for myself. I mean, based off what I would want. Not, obviously -- it's up to her what she wants. So I don't know. I guess what I'm trying to get at is... What do you think that line of appropriateness or inappropriateness is? Because... Isn't there some value to having empathy or having experiences... For example, my best friend's three-year-old daughter, there's a cartoon where they have a little girl with braces and a cane. So she started walking around as if she had braces and a cane, just because she wanted to be like that little girl. And, you know, I thought that was actually great, that, you know, just like any girl would want to be a princess or a unicorn or whatever it is... That she was actually incorporating this very sort of new thing. I guess it would be just a general question. What do you all think is appropriate or inappropriate? Because it's really... It's a delicate thing, I think. THOMAS: I basically agree it's a delicate thing. That's one of the reasons I wanted to illustrate... This is so not mainstream. Most people probably don't know about the references and the renderings in this world, but I feel like it does open up all of those questions. Personally, that's why... I don't know. I think it's... It is a good topic for discussion. Anyone have... >> The person with a disability (inaudible) is an option... I guess the... Oh, sure. I'm thinking that there's probably more... Sorry. There's probably more sensitivity about somebody able-bodied simulating someone with a disability than there is if the person with a disability opts to go into Second World and experience it without. So I don't know the answer to the question. But I'm thinking that it is probably more in one direction than it is in the other. That the sensitivity exists. >> For me, personally, I feel like if somebody makes that option, as long as they're doing that... And you can't regulate intentions, I guess. But I mean... For me, if somebody's actually genuinely curious... What would it be like, if I were in this wheelchair? I wouldn't personally be offended by it. But I guess... How can you really say that? I mean, you don't know what people's intentions are, necessarily. But yeah. So it's just something I've been thinking about. So just posing that as a sort of thought bubble. >> One more comment around that. Sorry. One more comment around that, just from our own experiences at Google, where we talk about doing things that can be empathy-building, and the concern there is... 
For example, this idea of very simply putting on a blindfold and then simulating blindness or sitting in a wheelchair, and then you'll know what it's like to be somebody who is in a wheelchair, and of course, anyone who is in the community knows that that two-hour experience is not it. And indeed, might have the opposite effect, where you conclude... That's an awful thing. Not realizing all the other mechanisms and abilities that people develop. So that might be the sensitivity, about... I think I'll just try it on, and then I'll deem myself an expert in that area. >> Well, actually, that was one of the comments that I made at NYU. And I was like... Because they're all -- oh, four hours is a long time. I said... If I were the professor, I think you guys got off easy. I would have given a whole month. Because then you develop -- right? Because they were... They even admitted that they picked a four-hour period, where things were easier for them, so they wouldn't have to, let's say, go on a date in one of these. So obviously how real can it be? So yes, yes, yes. All that being said... I agree. >> There was a comment in the back. I just wanted to add about the movie, about the blind character... Played by a non-blind person. That's kind of a different story. Because that's not really related with the game world. But more talking about... Employment. For actors with disabilities. In Hollywood, there are many great actors with disabilities, but less than 1% of actors are hired for the roles. So many roles with disabilities are mostly played by actors without disabilities. So many people in the disability community are upset about that. So that's kind of different than this portrayal of people. And also, people without disabilities -- still can't show emotions of what it feels like with a disability. It's better if you hire somebody with a disability for those roles. Good actors, of course. (laughter) THOMAS: I totally agree with that too. I was in a quick referential moment there, saying... Trying on identities. But the major issue that Sveta brought up -- I agree -- is about employment. And ensuring that people with disabilities have opportunities to have those opportunities in the entertainment space. Did you have a comment too? >> Okay. Yeah? All right. Cool. Totally agree about the acting. You know. Getting actors with disabilities is very important. And I think there is this strong... The community is just strongly against simulations of disability, because you never know what the person's gonna get out of it. And it's... Most of the time you assume that it's just gonna re-affirm negative ideas they had about the disability. I sort of found a way to... I end up doing a lot of disability awareness trainings... And I found a way to sort of flip that inside out. By not simulating the disability, but actually simulating the experience of inaccessibility. And so I just like -- I asked for, like, a sighted volunteer, and I hand them a sheet of Braille. And I'm like... Here are your instructions. Go ahead. Read them. And of course, the answer is... I don't know this. And I end up sort of having the social attitude that goes along with it, and just being like... Well, you don't read Braille? Do you read at all? What do you read?! And just kind of like... Sort of giving them the usual workaround of... Oh, I'm sorry. I didn't have enough money in my budget for print. Or... Try to request it ahead of time next time. And I'll try to try to have it. But probably not. (laughter) And then... 
Cameron actually helped me find this form online, so I asked them... Hey, you can fill out this form on my laptop right here, and I'll email you the info. And it's a form that's screen reader accessible, but there are actually no visual labels on it. It's just, like, some boxes and dots on a page. So, like, in that way, I think, it sort of simulates what it's like to be in your own shoes, but have that experience of not having access to something. You know? In the three ways -- like, the physical sense with Braille, the social attitude that I'm giving them, and the virtual component with the laptop. >> Is Unity the dominant platform for developing VR environments? THOMAS: I think there's at least three. But I think -- Unity is the one that I just personally used. So it's the one that I have the most experience with. But I believe Google and Facebook both have environments. And there's also another major game engine. Unreal? Not Unreal? Is it Unreal? Yes. It's like Unity. >> Is there any... I guess... Specifically screen reader accessibility features in Unity... Like, (inaudible)? THOMAS: So my understanding -- I believe any time there are these accessible experiences in these environments, it's basically that you created a dedicated... You know, you created the speech interface. You created the captioning interface. So you basically built it yourself. So there are examples of those. But I don't believe there's any way that you could tell someone that had built something else... Hey, implement this. And you'll be accessible. Like, that's what I'm not... I'm not aware of any, like, API or way that you would take advantage of, like, a system-level thing. You have to build it yourself. So I'm gonna end with this last example, which was from Chapel Hill. This is where I actually graduated from. I just love this example, just to take it out of tech a little bit. This was students -- basically, they have a day where students who are low vision and blind, high school and middle school students from around North Carolina, come to Chapel Hill, and the undergraduate students and maybe some of the graduate students build different experiences for them to try. This is basically an augmented reality one, a NASCAR experience -- it is North Carolina -- where we have a subwoofer, and we have students that are actually on the ground, shaking a chair and simulating the experience of, like, going around curves. I'm just gonna play this. But just to say that... You know, low-tech is good too. And I think the experience matters. There's so much more -- any time I talk about tech, I do like to say that sometimes the most fun experiences are just interacting in the real world. So we'll play this clip at the end. (car noises) (shouting) THOMAS: So there was also a fan blowing wind in her face as well. To, like, simulate that. And I just think it's awesome. You didn't have to program a bunch of stuff in Unity. You just need to set this up and make this experience. And by far, that was the most popular experience of all the -- most of them were computer experiences. So I filmed -- that's the one I filmed and thought was really interesting. So to close, talk to VR/AR companies about accessibility. All the demos and videos and research papers that were sort of run through in this presentation are all at this bit.ly link, and we'll post that as well to the Meetup page. Thank you. This is my contact information. And I have time for a few questions at the end. Thank you. (applause) Questions? Comments? 
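One footnote on the "build it yourself" answer about Unity: in a browser-based VR scene, a self-voicing layer is one way to do that by hand. This TypeScript sketch uses the Web Speech API just to show the general shape of the idea; it is not an engine-level accessibility API, and the focus handler is a hypothetical hook into whatever selection mechanism a scene uses.

```typescript
// Sketch of a self-built speech interface for a web-based VR scene:
// when an object with an accessible name gets focus, speak that name aloud.
// There is no engine-provided screen reader hook here; the app supplies everything.
function announce(text: string): void {
  const utterance = new SpeechSynthesisUtterance(text);
  window.speechSynthesis.cancel(); // drop any announcement still in progress
  window.speechSynthesis.speak(utterance);
}

// Hypothetical focus handler wired to whatever selection mechanism the scene uses.
function onObjectFocused(accessibleName: string | undefined): void {
  announce(accessibleName && accessibleName !== "object"
    ? accessibleName
    : "Unlabeled object"); // the Second Life "object" problem, surfaced at runtime
}

onObjectFocused("Bahia Tiki - Honolulu Tiki Bar");
```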
>> So I really like the UMass Amherst example you showed. That was awesome. What came to mind was something I know a lot of game devs are excited about right now, which is SLAM, simultaneous localization and mapping. That is in Apple's ARKit. I don't know if you've seen any application of that in accessibility. Because it seems like that would be an amazing application. THOMAS: I have not seen anything with SLAM. That's a new acronym for me. Now I have to read about SLAM. >> It's like the dream of AR, where you can permanently put things in artificial... In virtual realities. So your phone will effectively know that... Where it is, relative to the world around it. So it seems like (inaudible). THOMAS: Well, I'm interested in Apple's ARKit. I have not done anything with that, personally. But I would have, I guess... Pretty good expectations that Apple is putting accessibility stuff into that. But I don't actually know myself. Yeah? >> I was looking at some of the recent developments that we have. Especially when it comes to low vision and (inaudible). I was just wondering... Is that a space where (inaudible) analyzing the VR world and doing some sort of automated captioning or retrofitting objects in the virtual world with metadata? Do you think that's something that, like, could help reach... We have the same problem with most developers not bothering to add the correct metadata. And in VR, I kind of feel like... Most (inaudible)... So there needs to be a good (inaudible). THOMAS: So I think this tag -- which I didn't talk about -- it was cool. That was the idea of that project, was -- I don't know what logic they put in, but it did look at Second Life objects and be like... That's a chair. That's a table. So it did have some type of analysis it was doing on the object properties, I guess, to identify them and add the metadata. I would say, like, I think it's great to have that. But the perspective I come at it from is like... If the author that creates it... If we had a requirement and a way to say, like, you have to provide it -- it's just always gonna be a better alternative. If the person that creates it... But I think there should be continued work in that. And I mean, as this quote was showing, we have to assume it's going to be the same at some point in virtual reality -- so 350,000 objects, 31% just called "object". Like I said, I would like to see the rest of it. Because most of that is still probably not a very descriptive text alternative. Yeah. >> Computer vision is very good at some categories of recognition. Faces, some types of objects, text. But the problem is, when it starts to get things wrong -- you have to have an extremely high confidence threshold, or else you're creating a misleading experience for someone who's blind, for example. So that said, like, having an augmented experience where you can say that there's 20 people in the room... That's interesting. But it's not a replacement for basically... Human descriptions. And I think... I agree with Thomas that the best time to do that would be either when you're authoring the content, in the case of a VR space, or in the case of, like, augmented reality, you kind of have to offload that to, like, a third-party service. So it becomes expensive in that case. Yeah.
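[Editor's note: as a rough illustration of the authoring-time versus automated-metadata tradeoff discussed above, here is a hypothetical sketch. The SceneObject shape and describe function are invented for this note, not part of the Second Life project, Unity, or ARKit; it simply prefers an author-written description, hedges a machine guess behind a confidence threshold, and otherwise falls back to the generic "object" label that 31% of those Second Life objects ended up with.]

```typescript
// Hypothetical text-alternative lookup for an object in a virtual scene.
interface SceneObject {
  id: string;
  authorDescription?: string;    // written by the creator at authoring time
  recognizedLabel?: string;      // e.g. a computer-vision guess
  recognizerConfidence?: number; // 0..1 confidence of that guess
}

function describe(obj: SceneObject, minConfidence = 0.9): string {
  if (obj.authorDescription) {
    return obj.authorDescription; // best case: a human-authored description
  }
  if (obj.recognizedLabel && (obj.recognizerConfidence ?? 0) >= minConfidence) {
    return `possibly a ${obj.recognizedLabel}`; // hedge machine guesses to avoid misleading users
  }
  return "object"; // the fallback that 31% of objects were left with
}

// Usage:
console.log(describe({ id: "chair-17", authorDescription: "wooden rocking chair" }));
console.log(describe({ id: "blob-3", recognizedLabel: "table", recognizerConfidence: 0.95 }));
console.log(describe({ id: "blob-9" })); // -> "object"
```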
>> It's such an interesting idea. Whether there are any attempts to, say, crowdsource. (inaudible) >> Yeah. There's an app called Be My Eyes. Which is a service for blind and low vision people who can request help from sighted individuals through an iOS device. Basically an army of volunteers are on call, essentially, for really... Kind of like a TaskRabbit-style mechanism. Where you crowdsource it -- but in aggregate, it is really meaningful. The other... >> The Museum of Natural History here in New York -- they're crowdsourcing image descriptions for pictures they have on their site, but also what they have. Because they have tens of thousands of images that have no descriptions. And so it's a site that you can sign up for. The thing that they're doing that's kind of cool is you can not only submit descriptions, but you can also vote on descriptions written by other people. So they can quickly identify, like, oh, this description happened to get 50 downvotes and one upvote. It's probably crap. This image got a whole bunch of upvotes. Some downvotes. It's probably okay. (microphone feedback) >> I would also like to comment that... (audio dropped) THOMAS: Mirabai lost the audio. (tapping microphone) THOMAS: Okay. We're back. Sorry to interrupt. >> Yeah. That's built on an open-source framework that's used to crowdsource extracting metadata out of digital assets. Digital... (inaudible) >> Yeah. (inaudible) the New York Public Library. >> I think the hardest thing with augmented reality is... It's the realtime aspect of it. Like, doing it asynchronously, doing it in a crowdsourced fashion over time works pretty well. But if you're walking through a space and you need some... Any level of sophistication or description of content, right now... Well, I think you would need to pay for that. You would need to Mechanical Turk it. And that's very expensive. >> Yeah. I didn't share this video, but I would like to show it. It was also a very cool project from Carnegie Mellon. That they did. It's called Facade. Facade was basically an idea that someone could take a picture of a device inside their home that they need to use. The example was a microwave. You take a picture of the buttons on the microwave. You upload the picture to Mechanical Turk, and they make a 3D printed -- like, to scale -- overlay that they ship to the person, who puts it on the device. And then all of the buttons have the Braille labels for the device. So it was really cool. Like, they use a dollar bill to set the scale. You know? For how to print that out. But I definitely think that's a really cool idea too, that it's like... Of course, not everyone's gonna have a 3D printer at home. But Carnegie Mellon's done a lot of Mechanical Turk projects. And I do think they've done a lot of that work, of figuring out some of the fault tolerance -- how you make sure that Mechanical Turk contributors didn't just blow through a bunch of tasks. That's linked to in my links. This video is definitely really cool too. It's a neat idea of how you can take AR, 3D printing, and crowdsourcing to make old hardware, old devices, have an accessible interface. Yes. Antonio? >> It's also the right tool for the right thing. For instance, Be My Eyes and (inaudible)... Those are two virtual vision apps. One is very expensive and staffed by professional describers. And I would love to use that to navigate the streets of New York. Because I miss out on New York when I'm walking around. I don't know what I'm going by. On the other hand, when it's something like a sunset... The sunset was at 8:23 that day. I went out in the backyard, and Be My Eyes was open. And that day, in Brooklyn, I was connected to someone who described the changing colors of the sunset.
I didn't know there was the shape of a sphere that ends up disappearing right in front of you. The different tools for different things... You have to pick the right thing to work with. I would love to someday just walk and know what I'm walking by in New York and get a sighted description of that, with, say, headphones. And it would turn out to be something like that, with the right tool. >> So the wild and crazy futuristic... How do you actually finance some of these things? But I think that relay services are paid for on the telephone. By some insignificant tax that everyone pays on their phone bill. And that's what enables anyone to use that service, and it's free of charge to the users. And if instead of a fraction of a cent on our phone bill, we paid a few cents on our phone bill, could we actually cover all these additional services as well? I throw that out, just as a throw-that-out thing. >> I was curious. I wanted to ask a question. There is an app called... I don't know how to pronounce this. AIPOLY. >> Yeah. In my view, it's something that people think is great for the blind. But the blind don't really see a use for it. I know there's a bottle sitting in front of me. I know there's three people sitting within the same space as me, and I don't need the (inaudible). However, I think it's like... It's a good start to play with computer vision. To where you can start experimenting with these things. And take it to the next step. So... You should take it to the next step. So that you can point it at that door out there, and it can see the elevator from this far away. Tell me there's an elevator out there. If it sees it. Great. If not, I still know there's an elevator out there. (laughter) >> (inaudible) >> I just want to say quickly... It's amazing to me. So... Two years ago, how I met Cameron -- I was a judge for an assistive technology competition. And I learned a lot about technology that doesn't necessarily apply to me, as somebody with cerebral palsy. But one of the other fellows who was an exemplar was Gus. And Gus is blind. And, you know, there were so many apps that we thought were cool and he's like... I would never use that. So, for example... I don't remember the name of it, unfortunately. But a group created an app that would say... You know, Coke can or orange sock. He said... Do you know how messy my room is? It would take me hours to find my left orange sock if I were to actually use this. This is not efficient. It is not useful for anything practical in my life. So I find that really compelling, and something that I've taken with me past that experience. To realize... Wow. There's so many things that, when you're outside of that experience, you just don't realize until you have those conversations. Before you live that experience. Anyway... Just wanted to say you're not alone in that. There's a lot of apps like that. Especially, I think, for those with blindness. Because it's one of the easiest things to try to develop for in terms of a technological background. Like, most of the things I would need are hardware. But I think a lot of it is a sort of false empathy, if you will. Yeah. So anyway... >> Yeah. It might be something else. Like you said. It's easy to program for. My son studied engineering. And he said... Everyone's putting optical sensors into everything. It's not necessarily the right solution for that thing. >> Okay. Thank you, everyone, again, for coming tonight. We're gonna wrap up. We have a couple more minutes in the space.
We have to be out of here, doors locked, by 9:00. So hang out for a few minutes. Get to know people a little bit. And then I'll let you know when I'm gonna be kicking you out. Again, I wanted to thank SSB BART Group, Mirabai Knight and White Coat Captioning, Thoughtbot for the space, Joly with Internet Society, and all of you for being with us tonight. Go to our Meetup group. It's http://meetup.com/a11ynyc. We'll be announcing our follow-up Meetups there. We usually have one every month. First Tuesdays. And come back next time. Thanks again. (applause)