Research at 東工大 💮

I spent a little over two months at Tokyo Tech as a Global Opportunities Scholar, researching individual variation in speech when using voice‐recognition software.

Me, presenting my research poster at the final conference in August 2019.

Challenge

Despite Japan’s relatively small landmass, its national language is divided into over 16 dialects. How does this impact individual use of voice‐recognition software?

Outcome

We conducted a study with 30 native Japanese speakers of varying demographics and identified which features of their speech were significant and which were not.

From June 5 until August 22, 2019, I spent my summer in Tokyo. I was selected as one of two undergraduates from UW to attend the Tokyo Institute of Technology’s Summer Engineering Research program.

I joined Dr. Hilofumi Yamamoto’s research lab, where he helped me design and conduct research combining linguistics and human‐centered design.

Findings

Our research produced three main findings. While we had hypothesized each of them going in, it was important for us to confirm their significance under reproducible research conditions.

There are clear differences between the spoken and written Japanese lexicons

While some cases may sound obvious to speakers of Japanese, there are synonymous words that are used almost exclusively in writing or almost exclusively in speech. For example, the word “interesting” can be translated as either 面白い or 興味深い; however, 興味深い appeared exclusively in written Japanese in our study.

Word order is not strict in spoken Japanese

Japanese is generally considered to be an SOV language. However, we found that in many cases the word order of sentences was shifted around to accommodate the speaker; this made it easier for speakers to rapidly form sentences and did not detract from listeners’ ability to comprehend.

People are not acutely aware of how their language changes across mediums

Perhaps one of the most interesting findings was that participants didn’t notice how the words they used shifted when changing language mediums or politeness levels. While they made some conscious efforts to adjust their speech, the end results were still different from what they expected.

“After seeing the data from the study, I am surprised at how impolite the ‘polite speech’ looks.” (Study participant)

So what?

As voice is used more often in human‐machine interfaces, understanding how people speak naturally is important to creating usable products. Further, I believe that people should not have to adjust their speech when speaking to computers, so that we do not lose linguistic integrity. Many studies suggest that one’s native language influences their understanding and interpretation of the world. As such, I believe the preservation of language and the exploration of science through multilingual lenses are highly important.

Want to see more?

I presented my findings at the 121st meeting of the Information Processing Society of Japan’s Special Interest Group on Computers and the Humanities (人文科学とコンピュータ研究会) on August 1. View the meeting program (Japanese) and read my report and poster below.
