Browser extension helps the visually impaired interpret online images

Research Brief | Jan. 30, 2018

Imagine internet browsing without the ability to make sense of images. It’s a problem that visually impaired computer users face every day. While screen-reading technology gives users audible access to written content, it needs written descriptions to interpret images, and often there aren’t any.

Some website developers include descriptions of images in their code (called “alt text”) because it improves their websites’ search engine rankings. However, there is no mechanism for determining whether these descriptions are accurate or informative. As a result, developers often enter one-word descriptions such as “image” or “photo,” leaving visually impaired users with no useful information about the image.

To help address this problem, an ATLAS Institute researcher developed a system that collects the captions and alt text attached to other copies of the same photo elsewhere online, so that human-authored descriptions can be offered on every website where the image appears.
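Before any replacement captions can be fetched, a tool like this first has to decide which images on a page are poorly described. The short sketch below, written in TypeScript purely for illustration, shows one way a page script might flag missing or uninformative alt text; the word list and one-word heuristic are assumptions of this example, not logic taken from the Caption Crawler paper.

```typescript
// Illustrative sketch only: flag images whose alt text is missing or too
// generic to help a screen-reader user. The "generic" word list and the
// one-word heuristic are assumptions, not the Caption Crawler implementation.
const GENERIC_ALT = new Set(["image", "photo", "picture", "img"]);

function findPoorlyDescribedImages(doc: Document): HTMLImageElement[] {
  return Array.from(doc.querySelectorAll<HTMLImageElement>("img")).filter((img) => {
    const alt = (img.getAttribute("alt") ?? "").trim().toLowerCase();
    // Missing, generic, or single-word descriptions give the user little to go on.
    return alt === "" || GENERIC_ALT.has(alt) || alt.split(/\s+/).length === 1;
  });
}

// Example: list the image URLs that would need a replacement caption.
findPoorlyDescribedImages(document).forEach((img) => console.log(img.src));
```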
Called Caption Crawler, the image-captioning system compiles the descriptions in a database: if a photo has never been queried before, the system offers alt text in about 20 seconds; if the photo has been processed previously, alt text is available almost immediately.

The technology was developed by Darren Guinness, a PhD student in the ATLAS Interactive Robotics and Novel Technologies (IRON) Lab and the Superhuman Computing Lab, working in conjunction with Microsoft Research’s Edward Cutrell and Meredith Ringel Morris. The research, which merges the benefits of a fully automated system with the quality of human-authored content, will be presented at the Association for Computing Machinery’s (ACM) 2018 Conference on Human Factors in Computing Systems (CHI) in Montreal in April.

Users who want Caption Crawler to replace poor-quality alt text press a keyboard shortcut to request a replacement. The screen reader then automatically speaks the new caption, which is the longest caption found for that photo. A different shortcut lets users hear any additional captions that were found.

Caption Crawler only works with images that appear on multiple websites, but the approach is effective because about half of website administrators provide informative photo descriptions, Guinness says.

“Although this approach cannot caption unique images that only appear in a single place online, it can increase the accessibility of many online images,” he says. “Caption Crawler is a low-latency, incredibly low-cost solution to a big problem. It produces human-quality captioning without incurring additional costs in human labeling time.”

Caption Crawler combines a Google Chrome browser extension with a Node.js cloud server. The browser extension searches the Document Object Model (DOM) of the active webpage for image tags and background images, which are then sent to the server for caption retrieval. When Caption Crawler finds a caption for an image, the caption is streamed back to the browser extension, which associates it with the image.

Research shows that humans produce higher-quality captions than automated computer vision and machine learning approaches, Guinness says. Caption Crawler is a hybrid system that captures the benefits of both, prioritizing human-authored captions over machine-generated ones. If no human-authored caption can be found, a computer-generated caption from Microsoft’s CaptionBot is used to describe the image.
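The prioritization described above might be expressed roughly as follows. This is a minimal sketch assuming a simple shape for the server’s response; the names and types are illustrative, not the extension’s actual code.

```typescript
// Minimal sketch of the selection rule described in the article: prefer the
// longest human-authored caption found for an image, and fall back to a
// machine-generated CaptionBot description only when none exists.
// The CaptionResult shape and function names are illustrative assumptions.
interface CaptionResult {
  humanCaptions: string[];   // alt text and captions gathered from other sites
  machineCaption?: string;   // CaptionBot output, if any
}

function chooseCaption(result: CaptionResult): { text: string; human: boolean } | null {
  if (result.humanCaptions.length > 0) {
    const longest = result.humanCaptions.reduce((a, b) => (b.length > a.length ? b : a));
    return { text: longest, human: true };
  }
  return result.machineCaption ? { text: result.machineCaption, human: false } : null;
}

// Writing the chosen text into the image's alt attribute is one way a screen
// reader could pick it up; how the real extension injects captions may differ.
function applyCaption(img: HTMLImageElement, result: CaptionResult): void {
  const chosen = chooseCaption(result);
  if (chosen) img.setAttribute("alt", chosen.text);
}
```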
When CaptionBot text is read aloud, the screen reader first speaks the word “CaptionBot,” so that the user knows the caption is not human-authored.

“Hybrid systems that meld both human-quality text and machine learning approaches hold a lot of promise for improving access to online media,” Guinness says.

Full CHI 2018 paper: https://www-cs.stanford.edu/~merrie/papers/captioncrawler.pdf
Video: https://vimeo.com/249025146


Software framework turns small robots into controllers

Research Brief | Sept. 22, 2017

ATLAS researchers have developed a software framework that lets developers with little robot-programming experience repurpose off-the-shelf toy and educational devices as input and output devices for desktop applications.

Dan Szafir, assistant professor of computer science in the ATLAS Institute; Shaun Kane, assistant professor of computer science and ATLAS faculty fellow; and doctoral student Darren Guinness presented the GUI (graphical user interface) Robots framework at the Designing Interactive Systems (DIS 2017) conference in Edinburgh this summer.

The GUI Robots toolkit enables developers to transform low-cost robots (often less than $100) into wireless controllers for desktop applications.
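The article does not include the toolkit’s code, but the core idea of turning a toy robot’s motion into input for an ordinary desktop application can be sketched as below. The Robot interface, event names and synthesizeKeyPress helper are hypothetical stand-ins invented for this illustration, not the GUI Robots API, and TypeScript is used only for the example.

```typescript
// Hypothetical sketch of the general idea: subscribe to a toy robot's motion
// stream and translate it into input events for the foreground application.
// The Robot interface, MotionSample fields and synthesizeKeyPress helper are
// illustrative stand-ins, not the published GUI Robots API.
interface MotionSample { pitch: number; roll: number; speed: number; }

interface Robot {
  connect(): Promise<void>;
  onMotion(handler: (sample: MotionSample) => void): void;
}

// Stand-in for whatever OS-level input injection a toolkit like this performs.
declare function synthesizeKeyPress(key: string): void;

async function useRobotAsArrowKeys(robot: Robot): Promise<void> {
  await robot.connect();
  robot.onMotion((sample) => {
    // Tilting the robot forward or backward maps to up/down arrow presses.
    if (sample.pitch > 15) synthesizeKeyPress("ArrowUp");
    else if (sample.pitch < -15) synthesizeKeyPress("ArrowDown");
  });
}
```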
The toolkit supports connecting such devices to a wide range of software applications, including web browsers, 3D modeling tools and video games, extending application user interfaces into the physical world in interesting new ways.

With most software applications confined to screen-based graphical user interfaces, tactile input and haptic feedback are an important frontier for interaction design.

“Developers can use our framework to quickly prototype tangible interactions and attach them to existing applications,” Szafir says. “As consumer-oriented, wirelessly connected robots become ubiquitous, our work can enable new user experiences in which an ecosystem of helpful robots extends traditional graphical user interface applications.”

Much as a Nintendo Wii controller detects movement in three dimensions to drive realistic on-screen action, users can repurpose these robots to supply the same kind of motion input to any existing application. For instance, as part of the project a Sphero Ollie was configured as a controller for Rovio Entertainment’s popular Angry Birds game: rolling the robot back and then moving it forward launched a bird (a rough sketch of such a mapping appears at the end of this brief).

Robots can also be programmed to provide physical feedback, such as vibrating when a bird is launched. A user could likewise manipulate a 3D object on screen by moving and rotating the robot controller in the air.

To test the GUI Robots toolkit, the researchers asked twelve developers to build controllers for two applications: Angry Birds and Windows Movie Maker. All of the developers built working prototypes of the Angry Birds controller within a half hour; the Movie Maker controller took a little longer.

As an extension of this work, the team is exploring the use of GUI Robots to provide haptic displays for visually impaired users: movement or tactile feedback from a robot would communicate the kind of information typically presented in charts, diagrams, movies and even interactive simulations. Other promising uses include educational software, controllers for musical instruments and 3D modeling.

Read the research: http://dl.acm.org/authorize?N46726
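For illustration only, the pull-back-and-launch mapping and the vibration feedback described above might look roughly like the following sketch; the interface, thresholds and helper function are hypothetical and are not the project’s published code.

```typescript
// Hypothetical sketch of the Angry Birds-style mapping described above:
// rolling the robot backward accumulates "draw" power, moving it forward
// launches the bird, and the robot vibrates as physical feedback.
// The interface, thresholds and triggerLaunch helper are illustrative only.
interface MotionSample { pitch: number; speed: number; }

interface HapticRobot {
  onMotion(handler: (sample: MotionSample) => void): void;
  vibrate(durationMs: number): Promise<void>;
}

// Stand-in for however the launch is injected into the game,
// e.g. a synthesized mouse drag on the slingshot.
declare function triggerLaunch(power: number): void;

function mapRobotToSlingshot(robot: HapticRobot): void {
  let pullBack = 0; // accumulated backward motion, used as launch power

  robot.onMotion(async (sample) => {
    if (sample.pitch < -10) {
      pullBack = Math.min(pullBack + sample.speed, 100); // drawing back
    } else if (sample.pitch > 10 && pullBack > 0) {
      triggerLaunch(pullBack);   // forward motion launches the bird
      await robot.vibrate(200);  // haptic confirmation of the launch
      pullBack = 0;
    }
  });
}
```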