
McNibbler's video: Subtitles for Real Life? - C² Smart Glasses (Second Place Capstone Winners NEU 2020)
This is our completed project for our EECE4790/4792 Undergraduate Capstone. Our team, group C², comprises myself, Thomas Kaunzinger, alongside Andrew Nedea, Mohammed Laota, Jullian Liang, and Lauren Javan. Special thanks to our capstone advisor Chuck DiMarzio.

Abstract: The C² Smart Glasses were created to provide the user with a real-time transcription of the conversation they are having. The goal of this device is to ensure that those who are hard of hearing can be active participants in any conversation. The flow of conversation can be difficult to follow because of its speed, or because the other person doesn't speak loudly or clearly enough; the smart glasses aim to ease some of these difficulties. To do so, the device employs Bluetooth, optics, mechanical/electrical design, and Google's transcription API.

The main hardware components are a Raspberry Pi Zero, a rechargeable battery module, an organic light-emitting diode (OLED) display, and an omnidirectional microphone. The Raspberry Pi handles the basic interaction between the hardware and software components and is loaded with software that ties together the OLED, Bluetooth, and microphone functionality. Upon boot, the Raspberry Pi launches its software and waits for an Android device to connect, receive audio data, and return transcriptions. The Pi uses I²S to read audio samples from the MEMS microphone embedded within the device's enclosure. These samples are relayed to the Android mobile application over the Bluetooth connection. The app uses a WiFi or cellular connection to send the samples to Google's transcription service; once the transcription is complete, the app sends the text back to the Pi over the same Bluetooth connection. The Pi processes the packets and sends the data to the OLED screen, which produces the visual captions the user sees through the optics. This approach could easily be expanded to leverage a language translation API, which would further enhance the device.

The subtitles are projected onto a semi-transparent acrylic mirror screen using basic optics. The OLED displays the subtitle image, which is reflected off a mirror at a 90° angle within the device, and then off a second, semi-transparent mirror, again at 90°, placing the image on the screen in front of the user's eye. Between the mirrors sits a small lens, which both magnifies the image from the small OLED screen and projects it further away in apparent depth from the user's eye, preventing blurriness and eye strain.
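As a rough illustration of the lens geometry (the focal length and spacing here are assumed numbers for the sketch, not our measured values): by the thin-lens equation, 1/d_o + 1/d_i = 1/f, placing the OLED just inside the focal length, say d_o = 40 mm from an f = 50 mm lens, gives 1/d_i = 1/50 - 1/40 = -1/200, i.e. a virtual image 200 mm from the lens with magnification d_i/d_o = 5×. The subtitles therefore appear both larger and far enough away for the eye to focus on comfortably.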
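For a sense of how the Pi side ties the pieces together, below is a minimal Python sketch of the loop described above. It is an illustration rather than our actual code: the RFCOMM channel, the ALSA device name for the I²S microphone, the sample rate, and the newline-delimited framing of transcriptions are all assumptions.

```python
# Pi-side sketch: stream mic audio to the phone over Bluetooth RFCOMM and
# display the transcriptions that come back. Device names and framing are
# illustrative assumptions, not the project's actual configuration.
import socket
import subprocess
import threading

RFCOMM_CHANNEL = 1          # assumed channel the Android app connects on
ALSA_DEVICE = "plughw:1,0"  # assumed ALSA name of the I2S MEMS microphone
CHUNK_BYTES = 4096          # raw 16-bit mono PCM, relayed in fixed chunks

def show_subtitle(text):
    # Placeholder: the real device draws this on the OLED so the optics can
    # project it; the rendering code depends on the specific display driver.
    print("SUBTITLE:", text)

def stream_audio(conn):
    """Read raw PCM from the I2S mic via arecord and relay it to the phone."""
    mic = subprocess.Popen(
        ["arecord", "-D", ALSA_DEVICE, "-f", "S16_LE",
         "-r", "16000", "-c", "1", "-t", "raw"],
        stdout=subprocess.PIPE)
    while True:
        chunk = mic.stdout.read(CHUNK_BYTES)
        if not chunk:
            break
        conn.sendall(chunk)

def main():
    # Wait for the Android app to connect ("00:...:00" means any local adapter).
    server = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM,
                           socket.BTPROTO_RFCOMM)
    server.bind(("00:00:00:00:00:00", RFCOMM_CHANNEL))
    server.listen(1)
    conn, addr = server.accept()

    # Relay microphone audio to the phone in the background.
    threading.Thread(target=stream_audio, args=(conn,), daemon=True).start()

    # Receive newline-delimited transcriptions and hand them to the display.
    buf = b""
    while True:
        data = conn.recv(1024)
        if not data:
            break
        buf += data
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            show_subtitle(line.decode("utf-8", errors="replace"))

if __name__ == "__main__":
    main()
```

On the phone side, the app would forward these chunks to a streaming transcription request over WiFi or cellular and write each finished transcript back, terminated by a newline, for the Pi to render.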

This video was published on 2020-12-10 03:20:22 GMT by @McNibbler on YouTube. McNibbler has 1.3K subscribers, 94 videos, and 88.4K total views, averaging 1.8K views per video. This video received 9 likes and 5 comments, both below McNibbler's averages, and its view count was also below the channel average.