Many of us able-bodied individuals take for granted that everything just… well, works. We use our thumbs and hands to navigate the screen we're reading from. We get up and wander to the fridge when we want a snack. And I bet we don't think twice when we tap "start" to record our latest run on Strava.
But it's the people who can't take all of this for granted, and those amplifying their voices, that we have to thank for so many of the small interactions with tech and the products around us that make our lives easier. Design is born from necessity.
Think about how Siri and Alexa have become helpful (if somewhat controversial) companions in our lives today.
Or how easy it is to FaceTime your mum from the shower while trying to put on makeup and send off that last email (haha).
In my quest to become a better designer, and to design more equitably, I'm learning more about the various types of assistive tech and how to design for users with impairments.
But I also have my own accessibility requirements. I use Speechify, a text-to-speech app that reads written content aloud using natural-sounding AI voices. It helps users with dyslexia or ADHD, as well as anyone who simply prefers listening over reading.

Text-to-speech was invented to help people with visual or reading impairments access written content by converting it into spoken words, but today most people have probably used it in some form or another. I'm talking about you, Audible listeners!
By analysing how I use this assistive tech in my own life, I can better understand and empathise with the value it brings to others, and see the potential for designing solutions that don't just help everyone, but include people who may have been excluded before.
Text-to-speech and voice tech are super helpful, but they’re not perfect. They can mispronounce words, struggle with accents, or sound robotic. Plus, there are privacy concerns, since voice data might be recorded, and they don’t always work equally well for everyone—especially people with non-standard speech or underrepresented languages.
I aim to always design with accessibility in mind. By being mindful of the following (there's a small markup sketch after this list), we can make sure what we create works for text-to-speech and screen reader users:
Use semantic HTML so screen readers and TTS tools can interpret content correctly.
Write clear, concise copy that sounds natural when read aloud.
Structure content with headings and lists to help users navigate through audio.
Label buttons and links meaningfully: avoid vague labels like "Click here."
Include alt text for images that adds context without being overly wordy.
Avoid relying solely on colour or visuals to convey important information.
Test your design with real screen readers, such as VoiceOver or NVDA.
Allow user control over audio speed, volume, and playback if you're building in TTS features.
Design with accessibility standards, such as WCAG, in mind.
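For anyone who builds for the web, here's what a few of these points can look like in practice. This is a minimal sketch in React/TypeScript, purely for illustration: the component, prop names, and paths are made up, and your own stack will look different.

```tsx
// A minimal, illustrative sketch: semantic structure, meaningful labels,
// alt text, and user-controlled playback. Names and paths are made up.
import React, { useRef } from "react";

type ArticleWithAudioProps = {
  title: string;
  coverSrc: string;
  audioSrc: string;
};

export function ArticleWithAudio({ title, coverSrc, audioSrc }: ArticleWithAudioProps) {
  const audioRef = useRef<HTMLAudioElement>(null);

  return (
    // Semantic elements (<article>, <h1>) give screen reader users an outline to navigate by.
    <article>
      <h1>{title}</h1>

      {/* Alt text adds context without repeating the heading word for word. */}
      <img src={coverSrc} alt={`Cover illustration for "${title}"`} />

      {/* A descriptive link label beats "Click here". */}
      <a href="/transcript">Read the full transcript</a>

      {/* The native audio element gives users playback, volume and
          (in most browsers) speed controls for free. */}
      <audio ref={audioRef} controls src={audioSrc}>
        Your browser does not support the audio element.
      </audio>

      {/* If you add custom controls, label them so assistive tech can announce them. */}
      <button
        type="button"
        aria-label="Slow narration to three-quarter speed"
        onClick={() => {
          if (audioRef.current) {
            audioRef.current.playbackRate = 0.75;
          }
        }}
      >
        0.75×
      </button>
    </article>
  );
}
```

Where possible, leaning on native elements like `<audio controls>` is the easy win: the browser handles keyboard access, focus, and (in most browsers) playback speed for you, so you only need to label the extras you add yourself.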
I think this is super important. Without wanting to sound like a Medium article, being Mindful by Design means being intentional about making tech empowering for all of humanity. And that means everyone, regardless of their needs.
Let's stay in touch!