By Ajita Devkota

In this era, technology has become our heart and brain. We live with technology, we grow with technology, and we shape our perspectives based on what digital media presents to us. Technology has granted us freedom, agency, and the power to acquire knowledge and reach wider audiences. Indeed, technology has become a basic human right for every individual. But is this right exercised equally by everyone? Does the digital world treat everyone with the same level of accessibility and safety? Do individuals from historically marginalised communities, including people with disabilities, have equal access to the digital space?

Even though digital rights are a basic human right, they are not experienced equally by everyone. For many persons with disabilities, even the simplest digital tasks can become struggles due to inaccessible interfaces, unfamiliar tools, or the fear of online hostility. Those with multiple marginalised identities face even greater barriers, often feeling unwelcome or unsafe in online spaces. As AI becomes part of everything we use, it introduces new challenges, with its biases and assumptions often repeating the same prejudices we encounter offline.

Since childhood, independence has fascinated me, and I have always wanted to exercise it. However, my visual disability often compelled me to rely on others. I used to think this world was meant only for those who could see. Watching others busy with their mobile phones made me worry that technology and devices might remain inaccessible to me, leaving me with a future deprived of opportunities and access to knowledge. That was until I discovered screen reader software.

In 11th grade, I got my own phone, and it was then that I began leaving my digital footprint, without considering the benefits and risks, as any teenager would. When I opened my Facebook account, like all users, I received many friend requests, some from friends, family, relatives, and acquaintances, but also many from strangers. At that time, having a large number of Facebook friends was considered a mark of popularity and prestige, so I joined that race.

[Illustration: a racially diverse group of nine people, three of them seated in wheelchairs in the foreground, using smartphones, tablets, headsets, and earpieces against a faint, stylised circuit-board pattern.]

As I immersed myself in social media, however, I began receiving unsolicited sexual videos and texts, and sometimes I was even added to groups where adult content was shared. Reliving those moments still shakes me, as I realise how easily minors become targets of online violence. The most painful part is that they often cannot even recognise whether what they are experiencing is harassment or normal behaviour. From my own experience, I know that many teenagers struggle with this. And when your online experience is compounded by intersecting marginalised identities, such as being a woman with a disability, resisting harassment often leads to victim-blaming. Instead of questioning the perpetrator, society questions you: “Why are you, a woman with a disability, on social media? You should have protected yourself by staying away from these platforms”. These harsh remarks, judgments, and the normalisation of online harassment force individuals with multiple marginalised identities into silence, making them either bear the pain quietly or withdraw from online spaces altogether.

As a person with a visual disability, I interact with technology even more frequently than others.

Honestly, technological advancements have given me the freedom and independence I had always dreamed of as a child. They allow me to access content, even printed material, that I once thought was beyond my reach. Yet I am equally afraid of being vulnerable to data breaches and identity theft. When I upload sensitive personal information, such as my citizenship documents, passport, photos, and other private details, the platforms store that data. It is beyond my imagination where, when, how, and by whom this information might be used. That is just one drop in the ocean. When digital platforms are trained on biased data and algorithms, individuals from marginalised communities are more likely to be deprived of reliable and unbiased knowledge.

From my own experience, I often use Be My Eyes to get descriptions of photos and other images. Once, I uploaded a picture of my siblings and asked the app who looked younger. It responded that the person wearing sunglasses and modern attire looked younger, though it included a disclaimer of uncertainty. This shows how AI often equates being stylish, modern, and youthful with Western features such as fairer skin, brown hair, and certain clothing styles. But who gets misled by these biases? Of course, people like me, the visually impaired, who rely on these descriptions.

Another example came when I was using Grok, X’s AI chatbot, and asked how I could get into a prestigious U.S. university as a visually impaired student from Nepal. It assumed I might not be proficient in English, that my education might be of lower quality, and that I might not have the financial means to study abroad. It suggested that I should look for financial aid to gain admission. It was only after I confronted the platform about its bias and assumptions that it apologised.

Since AI has become an inseparable part of our lives, if it continues to show bias and prejudice toward certain communities, races, and genders, it will deprive all users, including people with disabilities, of impartial and accurate information. Weak data protection systems make historically marginalised communities particularly vulnerable to online harassment, violence, and identity theft. It is the responsibility of tech companies, engineers, and authorities building these platforms to ensure that the information they present is accurate and unbiased. At the same time, users must also take responsibility to cross-check and verify important information, rather than relying solely on what AI or digital platforms present.

[Photo: Ajita standing.]

About the Author

Ajita is an undergraduate law student, disability rights activist, and intersectional feminist dedicated to fostering an egalitarian world. As an Individual Grantee of the Strengthening Feminist Movements 2025 grant from Women’s Fund Asia, she founded सामंजस्य [Samanjasya] in Nepal, promoting political literacy and leadership among young women with disabilities. She is also a co-founder of Project Maitri, which provides inclusive sexuality education and legal literacy to children with disabilities, and co-initiated Workspace for All to enhance workplace accessibility. Additionally, Ajita has worked as a consultant with Plan International’s UN office in Geneva, contributing to global feminist initiatives. In 2024, she served on the U.S. Embassy Youth Council Nepal, implementing the civic engagement project Suchit to empower youth against misinformation. Her passion lies in advancing gender equality and disability rights and ensuring meaningful participation for women and girls with disabilities in decision-making; she believes firmly that everyone deserves an egalitarian world, regardless of identity.