Artificial Intelligence (AI) has a lot to offer people with vision loss. Whether it's reading menus, describing pictures, or even narrating scenery, AI can make a big difference. This week we chat with Steven Scott, host of the Double Tap podcast, about some of the best AI-powered tools out there… so far. Link to Double Tap on Apple Podcasts.
Artificial Intelligence (AI) and Vision Loss: Tools You Should Know
Presented by Ricky Enger
Ricky Enger: Artificial intelligence is everywhere, but how exactly do we access it? And are there specific benefits for people with vision loss? On this episode, podcaster Steven Scott joins us to discuss all things AI. I'm Ricky Enger and this is Hadley Presents. Welcome to the show, Steven. Welcome back, I should say; we've had you on before.
Steven Scott: That's right. I must have done something right because you brought me back.
Ricky Enger: We must have done something right because we didn't scare you off the first time. So it's a good thing.
Steven Scott: Everything's worked out for the best.
Ricky Enger: Yes. So for people who may not know who you are, give just a brief intro, tell us who you are and a little bit about what you do when you're not on this particular podcast.
Steven Scott: Yeah, okay. My name's Steven Scott and I host a show called Double Tap, which is a daily tech show. Ultimately, it's a show that each day picks up on the top tech stories and looks at them through the blindness perspective, but we also talk about the realities of life with blindness, right? We talk about our challenges and our daily lives, sometimes through the tech itself. We talk about all the cool tech that's out there, and we think, "Yeah, okay, that's great, but is it affordable?"
Ricky Enger: Yeah, yeah.
Steven Scott: Those are the kinds of questions a lot of people have in our community, and we've asked them ourselves many times. So we're trying to build a community of people who can come forward and share their ideas. And one thing we really encourage on the show is feedback and conversation. I want people to engage. I'm all for as much conversation as possible, and funnily enough, the one thing I get criticized for the most is encouraging people to have an honest conversation with each other. I think it's really important, especially in today's environment; I think it's more important now than ever.
Ricky Enger: Oh, definitely. So with that in mind, I think you're the perfect person to talk about today's topic, which is artificial intelligence, AI. It sounds so fancy and technical, and if you're not a techie person, it can sound a little bit intimidating just saying this phrase, artificial intelligence, "Oh that means I have to know a lot about computers or whatever. And this is not for me."
But I think with our discussion today, we're going to prove that wrong. At least that's my hope. Maybe it would help to start out not by talking about what AI is and how you access it or any of that; we'll get into that in just a bit. Maybe it would help to start with what we're actually doing with AI that didn't seem likely with other tools, say, a year ago. Is there something that you're finding that you're using AI for every day?
Steven Scott: So on our show, we've talked a lot about this topic, and it's a topic that attracts a lot of controversy, which our show likes to encourage. I put out this idea about alt text. For those who don't know what that is: if you post an image online, most platforms have the option to add alt text, which is short for alternative text. Ultimately, it allows you to add a description of the image onto the image itself, so that someone who is blind will know what it is.
So let's say you go to a coffee shop and you take a picture of a nice cup of coffee that you've just got, and maybe it's got a fancy design of a leaf on it, and you want to post that onto your Instagram or onto your TikTok or whatever. When you post it up there, you can add alt text that says, "This is a lovely cup of coffee sitting on a wood table with a beautiful leaf design on the coffee foam itself." It includes the blind person. It says to the blind person, "Hey, you're included, and this is what's going on in this image." What often happens when we as blind people are using our devices with, for example, a screen reader and we swipe past an image is that you get the dreaded "image," and that's all you hear, and you have no idea what it is.
I think it's great that we have alt text, and we certainly needed it for a long time, but artificial intelligence now allows the computer to almost actually see the image itself. What it can do is describe the image for us. So I've been saying on the show, "Wouldn't it be cool if we stopped pushing people all the time to put alt text everywhere and take time out of their day to do that, and instead we as blind people used the tools we have to make our own alt text, or just get the image described for ourselves?" That is one area that I think is really interesting, and it's growing; I see more and more of us using it.
The reason I love it is because if someone takes a picture of a beautiful sunrise or something, I can get so much more out of that image, because the AI can describe it in much more detail than someone with sight ever would. On top of that, because it's artificial intelligence, think of it like a virtual person that you can ask questions. You could ask more questions and say, "Okay, tell me more about that tree you've just talked about. Do you know what kind of tree it is? Are there any animals in the image?" You can query that and get information back. So suddenly that experience becomes much more three-dimensional than even sighted people get.
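For listeners who are curious what this looks like under the hood, the describe-then-ask-follow-ups pattern Steven describes sits on top of general-purpose vision models. Here is a minimal Python sketch, offered purely as an illustration and not as how Be My AI or any other app is actually built; it assumes an OpenAI API key in the environment, and the image URL is a hypothetical placeholder.

```python
# Illustrative sketch only: ask for a description of an image, then ask a follow-up question.
# Assumes OPENAI_API_KEY is set; the image URL below is a placeholder, not a real asset.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this photo in detail for a blind user."},
            {"type": "image_url", "image_url": {"url": "https://example.com/sunrise.jpg"}},
        ],
    }
]

# First request: a full description of the image.
first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# Keep the description in the conversation so the follow-up question has context.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "What kind of tree is that? Are there any animals?"})

# Second request: the follow-up, answered against the same image and earlier description.
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```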
Ricky Enger: Yes. And the virtual person doesn't get tired of answering those questions either.
Steven Scott: No, that's right.
Ricky Enger: So one thing I've noticed, and I suspect many of our audience knew already, is that if you've had sight before, there's power in getting a picture without any preface, without someone saying, "Here comes this picture of a tree and here's why I find it impactful." You have that moment of experiencing it. It used to be with your eyes, and maybe you're not seeing that so well anymore. My friends would've described their pictures previously and been happy to, but the difference is that now I can get a picture from a friend and she doesn't have to tell me it's a funny picture. I already know, because the AI tells me it's a picture of a cat on top of a bookshelf with its front paws hanging over the edge. Clearly, now I know why the picture is funny without having that experience conveyed to me by the person who sent it along.
So, I'm having a lot of fun with getting these pictures just randomly from friends or family and feeling like I can share in that moment of discovery and then I can share with them this is the AI description that I just got of this picture. We ended up having a really good conversation about, "Oh, it saw something that I didn't notice." Or "The AI description did miss one important aspect, which is this." But regardless, it does open up conversations that I wasn't having before.
Steven Scott: And there's so many examples of this. I had a listener get in touch who had been sent an image in an email of herself as a child sitting on her brother's knee. Her brother was no longer with her, and she had this image of herself. She had never seen it; she has no vision, never had any vision, and she wanted to know about the image. She had no idea, for example, that she wore a yellow dress in the picture, because she was very young at the time, so she didn't know anything about this. She was able to query that. She was able to find out what she was wearing, what the background was like, what the environment in the image was like, and it was just amazing the detail she got.
Now, I'm not against alt text. I'm not against people taking the time to write, "This is a cup of coffee with a leaf design on it." But the point is we can get so much more information from tools available to us through artificial intelligence. That virtual person approach gives us the access and gives us, I think more importantly, autonomy in all this, some control in this. So it's not just a case of being fed information; we can actually control that information, we can ask questions, we can gauge the information we want. What kind of dress is in that picture? What kind of hat was I wearing? What color are my eyes? There are probably loads of questions blind people have about themselves that they've never really thought to have answered, and they can get that information now.
As blind people, we can often feel like a burden on other people, and some of us just accept that and we just do it. Some of us shy away from a lot of social environments for that reason, and certainly from asking questions. When I go to a restaurant, I hate asking the waiter or waitress to read the menu. I can use artificial intelligence to scan that menu and actually query that image in the same way I can query the image of the cup of coffee. I can query a menu to say, "Tell me about the dishes on here that have chicken in them, or tell me about the desserts." That stuff matters.
Ricky Enger: Yes, this is such a great example, especially if you're new to this and you're already struggling with how to do the things you used to do without being a burden on your family or your friends. Then here we are at the restaurant, and I had a family member who, every time we would go out to a restaurant and I would say, "Okay, well what do they have?" would answer, "Oh, well it's pretty standard fare." What does that even mean? I have nothing to go on. Not even a reading of the mains or the categories. So I know people are dealing with this, and now you have that ability with artificial intelligence to say, "Hey, I want chocolate. Tell me about that."
Steven Scott: Yeah. And the image thing is interesting as well, because we often think about the image description angle from the perspective of sitting at home: someone sends us an image, or we're on social media and we find an image and we want to investigate it. But for example, recently it was a beautiful day and my wife and I had gone to the beach. We'd taken the dogs, and we were sitting at a little picnic table overlooking the beach and the water. It was absolutely gorgeous.
I just bring my phone out and snap an image, and I throw it into Be My AI. Suddenly I have this rich description of my environment, and I'm getting so much more than I could even imagine. I've got a little bit of vision, but not enough to really gauge too much of what's going on in front of me. At the beach I'm able to learn that there's an ice cream stand, that there are loads more dogs than I thought there were, and this is amazing. I'm getting all this information, and I can query it. I can say, "Tell me more about the ice cream stand or the prices on it." And it actually told me the prices, and I'm thinking, "This is amazing. This is so cool."
This is the stuff that enables us to be part of our conversations. A lot of you will know this if you have a sighted partner, and it's hard to say this, but it's true: we do feel second best sometimes. We can certainly feel second class. It's not the fault of anyone. It's not the fault of your partner; it's not intended by your partner to make you feel that way at all. But we just do, because we rely on them to give us information. When you can provide that information, for example, "Hey, the ice cream stand has a sale on, you can get two ice creams," suddenly you are in possession of information, and they go, "Oh wow, okay, cool, let's do it."
Suddenly you're part of the conversation. You're no longer just picking up the information as you go or listening intently, because the other part of being blind, as we know, is that it's incredibly overloading on our remaining senses, because everything is active. People often say our hearing is better because we're blind; no, it's because we are listening harder. We're using our hearing, our sense of touch, and our sense of smell, and we use all of that combined to navigate, to get our way around. We are using three senses just to compensate for one. So it is overloading, and if we can use tools and technology, if artificial intelligence can just help a little bit amongst all that and actually let us enjoy our environments, what's not to love?
Ricky Enger: Absolutely. For me anyway, AI is giving me access to things I didn't know I didn't know. Maybe I thought before that I had a pretty good idea of the things around me, but just like your example with the ice cream stand, I didn't know there was an ice cream stand on the beach. I didn't even know that was a possibility, so I wouldn't have thought to ask. But with AI giving us those descriptions, suddenly it's amazing just how much more detailed the world is. Again, maybe you're listening and you've been sighted for a long time, and suddenly you're not seeing things as well as you used to and you're missing those details. This is a way to get those things back.
Before we talk about how we're accessing AI and run through some tools that we're using, is there any way you're using AI that has really surprised you, maybe something not so obvious?
Steven Scott: We did a feature on the show about an app from Honda, the car company. And it was interesting, because you would never really put a car company next to blind people unless you're talking about driverless cars, perhaps. But Honda had come up with this idea for an app that allowed people to use the latest in artificial intelligence, the same kind of system we're talking about when it comes to describing images. Essentially, as you're driving along a road, you hold your phone up out of the window and the phone takes a number of pictures, snap, snap, snap. You're unaware of it; it's just doing that in the background. It then stitches those images together, builds a picture, and relays that back to you in poetic language. So it's telling you what you're seeing as you're going along.
Suddenly you're in a position where, if something happens on the road or something is passing by, whatever it might be, you are aware of what's going on. The point is that this is the kind of stuff that I think is great for inclusion and bringing us together. I don't want to just sit in the car or sit on the coach or sit on the train and be unaware of my world. I want to be part of it, but also part of it in my own way, on my own terms.
Ricky Enger: Yeah, you can say, "I don't care about the signs, I care about the scenery or vice versa." You have that choice.
Steven Scott: That's right. And then it goes one step further, because of the next big iteration of what is called GPT. It all gets very technical at this point, and it is complicated stuff, right? It's not easy to get your head around. But if we just stick with the image description approach, let's take it to the next step. The next thing that's coming is video, live video. The camera will be able to take live video and then respond to that. I've seen some amazing examples, and I think this is where things get really exciting.
So one example recently was from Be My Eyes, which is the volunteer-driven app, of course, that has an AI component to it as well. At some point in the future they will release a feature that uses the camera to let the app see what the camera sees and then respond to you in real time. The examples given include ducks playing in a pond. You ask the question, "Hey, what are the ducks doing?" and it will give you an audio description of everything that's going on.
Ricky Enger: It's happening in real time, right?
Steven Scott: In real time. So if the duck puts its head under the water, it's going to tell you. If the duck brings its head back out of the water, it's going to tell you. And it's going to do it all with a beautiful voice, and it's going to tell it to you like a story.
My favorite thing from Be My Eyes, which they demoed in a video, was hailing a taxi. The guy in the video, his name's Andy, and he's in London, England. He puts his hand out for one of the traditional black London cabs, and he is asking the app, "Where is the next taxi?" Now, available taxis are evident to sighted people because of their yellow lights, which of course we can't see. The app is able to say, "Oh, there's a taxi coming, put your hand out. The taxi's approaching you, the door is ahead of you." And the guy, who had a guide dog, opens the door of the taxi and gets in. The camera must have been pointing at the ground, because his guide dog went in front of the camera, and you hear the voice saying, "Oh, beautiful dog."
Ricky Enger: Yes, I love that. And we'll have a link to that video in the show notes because it's such a great example of what is coming. And that's the act of hailing a taxi. What a great practical example of how AI can be useful.
Imagine this: I don't know how many of you have gotten something new and you're trying to figure out how it works or how to put it together. So you go to YouTube and you search for the video, and it starts playing this lovely music, and it keeps playing the lovely music, and there's more lovely music, and it says nothing at all. It's music with no description. Imagine using AI to avoid that frustration of, "Well, I guess I'm not doing this right now. I guess I need someone else's assistance to figure out what is in this video." If it's 3:00 in the morning and no one else is awake, and this is the thing that you want to do, and the only thing standing in your way is knowing what is on the screen, AI will be able to give you that information, which is amazing.
Steven Scott: And I just want to say as well, I know that when we talk about artificial intelligence in a professional capacity or in the workplace, we often talk about it in the sense of summarizing an email or perhaps even composing an email. So it can be really useful if you are someone who struggles to get your thoughts down and compile them. I must admit, I'm bad at this. I'm really bad at trying to write emails, and being able to use AI to take what I've written, which is very rough notes, and say, "Can you turn this into something legible?"
Ricky Enger: Yes.
Steven Scott: Presentable, with good grammar and good spelling. That really matters, especially the spelling and the grammar. Because look, I have to be honest, since losing more vision, I really am nervous when I'm sending a professional email these days, because I'm often thinking, "Is this right?" I'll sometimes send it to a friend first, and they'll say, "Yeah, your grammar was a bit off, or you missed a few capital letters." And I'm thinking, "Really? I should have got all that right." You spend so much time, so much time, going through things letter by letter and character by character, using braille or speech or whatever it might be. Being able to throw this to, essentially, the virtual person again and say, "Can you check this? Can you rewrite it if necessary, or can you correct my grammar and spelling?" and have it send it back and say, "Here you go," just gives you confidence in what you're doing. And again, we're doing this without being a burden on anyone else. It's great.
Ricky Enger: Yeah, it's a beautiful, beautiful thing. So not everything is perfect, and we are going to talk about some things to watch out for with AI. But before we jump into that, why don't you give two or three examples of tools that you are using. I have a couple of things in mind as well, but what AI tools are you using? Are you primarily doing this from your phone or a wearable or both? And what apps?
Steven Scott: So I guess it's three for me. The first is Be My Eyes, or Be My AI through Be My Eyes, which is an app I definitely use a lot. It's also available as a Windows desktop app now, which is really useful. It allows you to take screenshots and query them and do all the things we've talked about.
In terms of wearables, I'm using my Meta Ray-Ban glasses. Now, I've had mixed results with these, because in the UK we actually don't have official access to the AI component of the glasses, so we have limited assistant functionality. But if you're in the States, you will get full access to it because it's available there, and I believe in Canada as well. You can get what's called Meta AI. Now, it's a little bit limited, it's not perfect, but it can do things like look and describe, so you can query what's in front of you.
I'll give you an example. Recently we were on a cruise, and we were standing outside just looking out at the water, and there were two cruise ships ahead of us, and I wondered where these ships were coming in from. So I asked my Meta Ray-Ban glasses, "Okay, can you describe the scene for me?" And it explained that here we have this dock and there are two cruise ships. And that's all it said. I thought, "Okay," and I was again able to query it and say, "Can you tell me where the cruise ships are from?" It was able to tell me that one was from Norway and one was a Holland America ship. So I was able to get that information just from the glasses, which is amazing.
I mean, that's on top of all the usual things you can do with these smart assistants, like asking the time or checking the date and all the things we've kind of become used to. And it's more than just that. You can make calls, you can do WhatsApp calls, you can even connect to an Aira agent if you have an Aira account, which gives you access to visual interpreters. So there are lots of cool things they do.
I think the other thing that surprised me a little bit, because weirdly I didn't expect much from it, was Microsoft Copilot. Microsoft Copilot is part of Windows 11. If you have Windows 11 on a computer, you may have come across it as you've been navigating around, or maybe someone's been talking to you about it. And certainly if you've been following anything in the tech press recently, you'll have heard of Copilot. Ultimately, it's exactly what it says: the assistant built into Windows 11. In my day it was called Clippy; today it's called Copilot. It is quite good because it's very easy to navigate with a computer screen reader. I can ask questions; I can submit images to it. Again, it's a lot of the things we've talked about, and I often call it many doors to the same room. The tool you're using is essentially the same one; we're just using different devices and different ways to access it and query it. But it's essentially the same place.
Ricky Enger: Right, exactly. Go figure, my tools are very similar to yours. I do have the Meta glasses, and I also purchased the Envision glasses well before this, and those have AI on them as well. The difference between Meta and Envision is that Envision was designed specifically for people who are blind or have low vision, so there are things it may do with a bit more thought toward that than the standard Meta AI, where the AI and the glasses were essentially meant to help you caption a photo to post to Instagram, or to let you live stream and then use the AI to write about what you're streaming. It just happens to be useful for us as well. So the Envision glasses are one. Microsoft's Seeing AI has an AI component as well, it is in the name, but they have recently added a bit more to it as far as describing things in a room, describing a scene, things like that.
There is an app called PiccyBot, P-I-C-C-Y bot. And this is kind of an interesting one. It blends in an AI personality, so they all have their own voices and use certain phrases to make the description a bit more poetic or a bit more bubbly or what have you. And it can describe a photo or a video.
So PiccyBot is one. And the one thing I've been using that we haven't mentioned just yet is still Be My Eyes: I'm using it to do some shopping. I'm able to share a product page from Amazon, for example, and get a description of the product, which is really helpful if you're trying to determine the color of something or get an idea of its design. So those are my tools. But sometimes, no matter how good the tool is, there's the possibility that it can get it wrong. Have you had any instances where the AI very confidently told you something which turned out to be absolutely not true?
Steven Scott: Yes, and it was very disappointing, Ricky. I'll tell you, this was a disappointing day for me, because I had my heart set on something from a menu which I had used AI to scan and then give me information about. I asked it to read through the desserts, and it told me about cheesecake. That's what I wanted, traditional cheesecake. I said, "Great." So when the waitress came over, I said, "I'd love the cheesecake, please." And she said, "We don't have cheesecake on the menu." And I'm like, "Well, according to this, you do." And she said, "No, definitely not." It turned out that what the menu actually had was a cheese board; the cheesecake was made up.
Ricky Enger: It just made it up.
Steven Scott: I don't know, it must have seen cheese and said, "Hey, let's just say it's cheesecake." And that's part of the problem at the moment. And it is a very interesting dilemma because I've seen people say, "Well, look, if something is so inaccurate, why would you trust it at all? Because it could just be feeding you any old information."
But the truth is that this is an ongoing process that is being fixed. Bear in mind, some of the models that are being used to give us this information have only really been around for about a year or so. I mean, AI has been around for decades, but the kind of models we're using are fairly new and certainly fairly new to most people. So there's a lot of learning to be done. We're at the very early stages. And of course, this is the worst it will ever be.
There are examples, some really bad examples, of things like when you mention you're blind in a conversation with an AI and it apologizes to you. And actually that gets to the heart of the problem: the information inside the AI. This is not some sci-fi contraption dreamt up in a lab. This is information that people have created and put on the internet, being soaked up by these systems. These are what they call large language models, which are essentially huge brains that are constantly reading every single page of the internet. So if there's lots of information online about blind people being poor and needing charity, and lots of negative opinions, ableist opinions, all that stuff, then that is what it's taking in.
Ricky Enger: It inherits those biases. Yeah.
Steven Scott: Exactly. And that of course will apply to race, it will apply to gender, it will apply to sexuality, it'll apply to everything. And that is a challenge for the generations to come and for the companies that are building this technology: how do you engineer all of that out of the system? There's no easy answer to that one.
Ricky Enger: And what you said was so spot on about why you would trust it if it's going to get things wrong some of the time. I think that's an important conversation to have because it can get things wrong. But having said that, it can get things right as well. And maybe the takeaway is that yes, AI can sometimes get things wrong, but if you have some other method of verifying the information that you've gotten when it's important, then this is a useful tool.
I was going through products in my cabinet, and the AI said one of them was facial lotion, and it very much was not. I could tell by the smell of it that this was actually a cleaning chemical, and I wouldn't want to put it on my face. But the point is that I got the AI to tell me something, and then I used some other things that I know to verify it before I put anything on my face. So AI is not the answer to everything, at least I don't think so just yet. But it is an incredible tool to put in the toolbox alongside other things that we're already doing.
Steven Scott: When my co-host, Sean, and I talk about these issues or these challenges, he will say, "Oh, we can't use this app because it's not perfect yet, and we can't use this because it's not..." And I'm often saying to him, "When an app comes out or a piece of hardware comes out, no one's suggesting that you empty your home of every other gadget and gizmo you have. Or if it's an app, that you delete every other app on your phone and only use this one." It is exactly what you've said. It is a tool in the toolbox.
Now, the great thing, if you're looking to dip your toe into the world of artificial intelligence and you're a little bit nervous about it, concerned about things like misinformation or getting things wrong, is that this is where tools like Be My Eyes and Aira Access AI are really good. These are free services: Aira Access AI is free, as is Be My AI. The great thing is that you can upload an image through Be My AI and have it queried, but then, if you're unsure, you can connect with a volunteer, and that volunteer will check that image for you and say, "Yes, that's correct. That is exactly what you think it is," or "Is it the right color?" or whatever it might be. And the same with Access AI, the difference being you're dealing with trained agents.
So I think it's really useful to know that not only are there tools out there that are great for getting you into the world of artificial intelligence and using it, but you've got that backup in place as well. So, if you're unsure and you've no one else around to ask at 3:00 in the morning, you can get that support from a volunteer.
I do love Be My Eyes with the volunteer factor, and I love Aira as well. But I've got to say, I think it's because there's such a sense of happiness from the people who connect with us on Be My Eyes. They're so pleased to help, and I love that. They're so willing to help.
A story from before AI: the previous year, I'd gone into a store and bought a card for my wife for her birthday. When I got home, she said, "It's a lovely card. It just happens to say Happy Anniversary on it." And I said, "Okay, that's not great." It was because I was being that kind of stubborn guy: "I'll do this myself. No one's going to help me. I don't need help."
The next year, when I'd gone back to get another birthday card, I brought Be My Eyes along, and they pointed me to this beautiful card, which was really nice. They spent ages with me going through the card. And honestly, I actually got emotional at the end of it, because I thought, "I've never had that before, and I've never had it on my terms." A salesperson, yeah, okay, they'll help, but they don't quite connect in that way. This person knew what their role was here. So AI can add a lot to that and can do a lot with that, but let's not forget the human aspect as well. It's obviously there for backup.
Ricky Enger: Wow. What a beautiful way to sum that up. I love it. We do, by the way, have a podcast with Mike Buckley of Be My Eyes. So if you're listening and thinking, "Hmm, sounds like a neat app, I should figure out what that is." We'll have that in the show notes along with all the other tools and videos and things that we've mentioned here and that has been a lot. I knew that you and I would have a fantastic time, Steven. Talking for just this length of time is difficult because I know that we could carry on. But thank you so much for joining us and sharing your thoughts about this. Very cool.
Steven Scott: Listen, anytime. Thank you for having me on. And I've got to say, Hadley is such an amazing, amazing organization. The work you do to support blind people and people who are coming to sight loss, you do such amazing things, and I've learned a lot from this organization. So thank you for allowing me to be a small part of it.
Ricky Enger: Yes, we appreciate that. Thank you.
Got something to say? Share your thoughts about this episode of Hadley Presents or make suggestions for future episodes. We'd love to hear from you. Send us an email at podcast@hadleyhelps.org. That's P-O-D-C-A-S-T at HadleyHelps dot O-R-G. Or leave us a message at 847-784-2870. Thanks for listening.