
Community Wishlist Survey 2021/Reading/Alt-Texts and Image Descriptions


Alt-Texts and Image Descriptions

  • Problem: Many images lack alt texts and/or image descriptions for visually impaired people. Images without them are not accessible to blind people, and text that refers to an undescribed image is not fully accessible either.
  • Who would benefit: Visually impaired people
  • Proposed solution: Add a field to the media uploader on Commons, via structured data, to raise awareness and to supply images with alt texts and image descriptions.
  • More comments: Existing descriptions on Commons and on Wikipedia, for example, are not descriptions in the sense described above.
  • Phabricator tickets: task T260006, task T21906, task T166094, task T213585
  • Proposer: Conny (talk) 05:52, 27 November 2020 (UTC)

Discussion

  • Good proposal. It would also be helpful to explicitly describe what an alt text should be, as I suspect many people do not know (considering what I've seen in those instances where there is an alt text). On a related note, would it be possible to add default alt text automatically via image recognition software (that can then be improved by humans)? --Ita140188 (talk) 04:24, 2 December 2020 (UTC)
  • If this is a proposal to centrally store alt text for images, to be used on projects, then I strongly oppose it, for the reasons given when it has previously been suggested elsewhere, and rejected. At the very least, please do not progress this proposal without first consulting screen reader users and/or accessibility professionals. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 10:39, 10 December 2020 (UTC)
  • Yet another field? Why not just use the image's plain old description? --NaBUru38 (talk) 21:04, 10 December 2020 (UTC)
    These are two different things. Alt text is a substitute for seeing: it describes what sighted readers can already see in the image. A description explains something above and beyond what you're seeing; it serves both sighted and blind readers, and it may be useless unless you can see the image or read the alt text. (I am not an expert, but I'm sure I can provide an example or two if this is still unclear.) Michael Z. 2020-12-11 23:14 z 23:14, 11 December 2020 (UTC)
    A proposal should state that a particular problem needs to be solved, but it should refrain from prescribing implementation details. Which approach is finally chosen to reach the goal is a matter for research and broader considerations.
    An older plan for global storage of descriptive texts, to be delivered every time an image is transcluded on any page, looked like this:
    <alternatetext lang="fr">…</alternatetext>
    <alternatetext lang="pt">…</alternatetext>
    These would be located on the file description page, and the best match for the user's language, with a final fallback to English if present, would be added to the image presentation by the server. This is not limited to Commons and some Commons-only structured data, but would work on every wiki, even outside the WMF, for local media as well.
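    A minimal sketch of the intended behaviour (the file name is invented for illustration, and the <alternatetext> tag is the plan described above, not an existing feature): an article embeds the image as usual, without any extra parameter,
    [[File:Example.jpg|thumb|Caption written by the article's editors]]
    and, because no alt= parameter is given, the server inserts the stored text for the best-matching language (finally falling back to English, if present) into the generated HTML, roughly:
    <img src="…" alt="…stored text for the best-matching language…">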
    BTW, the technicalities about alt text, screen readers, and accessibility started with HTML 4 in 1998 and were further developed by ARIA. It is quite clear what is to be delivered within the HTML document.
    --PerfektesChaos (talk) 16:42, 12 December 2020 (UTC)
  • I fully support the general idea of the proposal, but it has some major flaws. First of all, this proposal would have a huge impact on the Commons community's workload (manually adding structured text) and should therefore ask for their support at Commons:Requests and votes. Also, I agree with PerfektesChaos and Andy Mabbett. Chances are there is a different solution that serves people with a vision impairment better and faster. Any such solution needs to be based on actual user needs and their standard accessibility technology. I would rather support a community request to the Wikimedia Foundation asking that they generally update their websites to meet current accessibility standards. --Martina Nolte (talk) 19:12, 12 December 2020 (UTC)
    Thank you for your general support. The image description is optional additional information, so it would grow slowly. It would make sense to have new authors review these texts. Conny (talk) 10:59, 13 December 2020 (UTC).
  • As explained in the linked tasks, having a place in UploadWizard to enter alt text is part of the work needed to get it to users, but we'd also need a mechanism to transfer the data from Commons to the wikis using the image. I agree with PerfektesChaos that this proposal could use some more generic phrasing. --Tgr (talk) 00:26, 13 December 2020 (UTC)
  • What the proposal actually wanted to express:
    Implement a mechanism that delivers an alternate text in the user's language with every image transclusion, stored and maintained centrally together with the image media, if the current transclusion does not provide an alt= parameter.
    However, the original proposal is too explicit and specific with regard to the following keywords:
    Commons – what about images on local wikis? What about non-WMF MW installations?
    structured data – this is one particular storage mechanism, but limited to Commons, or at least to the WMF. The proposal should not prejudge one solution only.
    Uploading – what about the 100,000,000 images already present on Commons and local wikis when the roll-out date of the new mechanism arrives? Is this really a feature for future uploads only?
    The proposal sounds a bit as if everybody uploading a new image would be forced to provide a description for blind people, perhaps even as a mandatory requirement.
    ✦ That sounds like a further hurdle making the upload forms and procedure more difficult.
    ✦ It requires some understanding of what visually impaired people need, as opposed to a common caption; this was already raised in this section. The traditional caption specifies which things are presented by the image. The alternate text tells what is visible, and is rather pointless for those who can see the image. Writers need a deeper grasp of which information blind people require to imagine the image from text.
    ✦ The uploader could have taken a picture of the it:colosseo and provided an alternate text in their mother tongue, Italian. Readers of a Wikipedia article in Japanese or Spanish might want to hear a concise description in their native language, and not really a machine-generated one.
    ✦ However, current automatic translations, if they work at all, are pretty smart and might create a version in the user's language on the fly. But that is step 2+x, far off.
    There is already a running implementation of the tag-extension approach that <alternatetext> could follow: TemplateData creates a compressed JSON description of the <templatedata> element, which is stored as a page property of the template page. That might be a role model for how <alternatetext> could be evaluated when an image description page is saved, creating a similar page property of the media. When delivering an image transclusion within a particular HTML page in a particular user language, that page property could be evaluated via a fallback cascade. If cache administration is complaining, then page language is acceptable.
    ✎ Rather than delivering it to everybody with all HTML pages, a page property could be evaluated on the client side and the alt text added to the document in the user's language by a MediaWiki gadget, for those who have set a preference for this feature. However, screen readers are reluctant to execute JavaScript.
    Greetings --PerfektesChaos (talk) 18:29, 13 December 2020 (UTC)
    The "user language" part is debatable (everything else about the article, including the image caption, is in the content language; why would the alt text be an exception? Architecturally, it wouldn't really fit into our current caching model); otherwise this is a good description. Using structured data is a no-brainer, but it certainly doesn't have to be stated explicitly in the proposal. --Tgr (talk) 23:57, 13 December 2020 (UTC)
    I wrote “If cache administration is complaining”.
    Regarding my last point, delivering later on the client side for a very small minority rather than to everybody (where it would be ignored by >99% of visitors and consume bandwidth): on the client side the user language is known, and of course the description would be given in the user's language if available, even when visiting an article on a different wiki, just as the environment around the article is in the user's language.
    How does “structured data” fit into local and non-WMF wikis?
    Greetings --PerfektesChaos (talk) 17:06, 14 December 2020 (UTC)
  • As a screen reader user, I think this is a good idea in theory, but I tend to agree with the concerns of Andy Mabbett, PerfektesChaos, etc. The correct alt text for an image (as opposed to its description) can depend on context as well. Graham87 (talk) 13:04, 16 December 2020 (UTC)
  • To echo what Graham87 says, alt text will often depend on context. When images are used in Wikipedia articles, they are intended to convey information relevant to the article, and it is the pieces of that information visible to a sighted reader (but not to a screen reader) that need to be conveyed by the alt text. If we use the image File:Washington and Lafayette at Valley Forge.jpg in an article about Valley Forge, we probably intend to convey the winter conditions depicted, but the colour of the brown horse is immaterial, so the alt text describes the conditions and not the horse. If we use the image in an article about George Washington's horses, we will probably take the opposite course.
    I'm sorry, but I don't see how a central repository like Commons can anticipate every possible use of its images and be able to usefully suggest alt text for them. Writing good alt text is very often a bespoke job for editors at individual articles, not an automated process. --RexxS (talk) 21:26, 16 December 2020 (UTC)
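    To illustrate the context dependence described above (a hypothetical sketch; the alt texts and captions are invented for illustration, not authoritative descriptions of the painting), the same file could be transcluded with different alt= values depending on the article:
    [[File:Washington and Lafayette at Valley Forge.jpg|thumb|alt=Officers on horseback riding through a snow-covered winter camp|Valley Forge in winter]]
    in an article about Valley Forge, versus
    [[File:Washington and Lafayette at Valley Forge.jpg|thumb|alt=George Washington mounted beside Lafayette, one of them riding a brown horse|One of Washington's horses]]
    in an article about Washington's horses; only the article's context decides which visible details matter.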
    When I reformulated the proposal, I wrote: if the current transclusion does not provide an alt= parameter.
    If an individual text is provided with the transclusion, that one takes precedence.
    If not, and a text happens to be available in a fallback language, the one from the local or Commons image description page is offered on the current page.
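    A minimal example of that precedence (the file name is invented for illustration): the transclusion
    [[File:Example.jpg|thumb|alt=Text written for this particular article|Caption]]
    keeps its hand-written alt text untouched, while
    [[File:Example.jpg|thumb|Caption]]
    has no alt= parameter and would receive the centrally stored text in the best-matching fallback language.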
    Looking at 100,000,000 images once the proposal has been implemented, and perhaps 10,000 local alt texts as a starting point (those might be collected and made available for all pages on all wikis), we are at 0.01% of the images, with a single language. The issue of context-dependent descriptions is probably the least of the problems.
    Greetings --PerfektesChaos (talk) 14:35, 18 December 2020 (UTC)
  • Please excuse me if this is a stupid question, but can you tell me how many images already have alt texts? Or what percentage? --KH32 (talk) 12:02, 18 December 2020 (UTC)
    This is no stupid question at all.
    The German Wikipedia has 2,500,000 articles, 500,000 of them with at least one image transclusion and about 4,000 with at least one non-empty alt parameter; most of those are insufficient, since they merely repeat the caption without giving a visual description, so there are perhaps 1,000 or 2,000 meaningful alternate texts. IIRC enWP and others have no better ratio.
    One of the ideas is to collect all existing alternate texts with a gadget, check manually whether they are visual descriptions and not confusing, and populate the linked media description page (local or Commons) with that entry for that language. They would then be available on all other pages and wikis where the same image is transcluded. Further descriptions and languages could be added later.
    Commons has 66,988,373 media files right now.
    Greetings --PerfektesChaos (talk) 14:35, 18 December 2020 (UTC)
  • Thank you very much. On the other hand, most images in articles get a caption there that delivers context. So what would we lose if we introduced alt texts that mostly deliver a short image description, a translation of the visual content for blind users? The context would still be added article by article in the captions. This way we would avoid duplication and allow automatic translation. --KH32 (talk) 12:00, 19 December 2020 (UTC)
  • PS: I'm just about to expand the article on alt texts in the German Wikipedia. Maybe someone here is on the same track? Maybe we could join forces. In any case, I would be happy to hear your responses. :-) --KH32 (talk) 12:08, 19 December 2020 (UTC)

Voting