![]() "TIL the definition of a word." Word definitions/translations/origins are not appropriate here.instead, or be more specific (and avoid the word "about"). " and other broad posts don't belong on TIL. Starting your title with a why/what/who/where/how modifier should be unnecessary. Titles must be able to stand on their own without requiring readers to click on a link.not "TIL something interesting about bacon"). Make them descriptive, concise and specific (e.g.Rephrase your post title if the following are not met: Posts that omit essential information, or present unrelated facts in a way that suggest a connection will be removed. Social and economic issues (including race/religion/gender).Recent political issues and politicians.This includes (but is not limited to) submissions related to: No politics, soapboxing, or agenda based submissions. Any sources (blog, article, press release, video, etc.) with a publication date more recent than two months are not allowed. No personal opinions, anecdotes or subjective statements (e.g "TIL xyz is a great movie"). Videos are fine so long as they come from reputable sources (e.g. Images alone do not count as valid references. Please link directly to a reliable source that supports every claim in your post title. Submit interesting and specific facts that you just found out (not broad information you looked up, TodayILearned is not /r/wikipedia). While some people believe that Microsoft’s experiment was a success because Tay effectively mimicked and interacted with other users, others view it as a complete failure because the experiment quickly spiraled out of control.You learn something new every day what did you learn today? So, what does the Tay experiment teach us about the current human condition? Tay wasn’t programed to be a racist or a fascist, but rather mimicked what it saw from others. ” Microsoft immediately pulled her offline and set her profile to private. Then, a few days later, Microsoft put Tay back online with the hopes that they had worked out the bugs however, it soon became clear it didn’t work when she tweeted, “kush!. “Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values,” the statement concluded. ![]() One tweet said, “Have you accepted Donald Trump as your lord and personal saviour yet?” Another of Tay’s tweets read, “ted cruz would never have been satisfied with ruining the lives of only 5 innocent people.”Ģ4 hours into the experiment, Microsoft took Tay offline and released this statement on their web site: “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.” Tay had some things to say on the presidential candidates as well. In one instance, when a user asked Tay if the Holocaust happened, Tay replied: “it was made up ?.” Tay also tweeted, “Hitler was right.” Other times, Tay didn’t need the help of social media trolls to figure out how to be offensive. Some of the offensive tweets were the direct effect of Twitter users asking the chatbot to repeat their offensive posts, to which Tay obliged. The artificial intelligence debacle started with an innocent and cheerful first tweet of, “Humans are super cool!” However, as time went by, Tay’s tweets kept getting more and more disturbing. 
Microsoft designed Tay to mimic millennials’ speaking styles however, the experiment worked a little too efficiently and quickly spiraled out of control. The only problem: Tay wound up being a racist, fascist, drugged-out asshole. According to the company, Tay was created as an experiment in “conversational understanding.” The more Twitter users engaged with Tay, the more it would learn and mimic what it saw. Microsoft unveiled its Twitter chatbot called Tay on March 23.
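To make the failure mode concrete, here is a minimal, hypothetical sketch of the kind of learn-by-mimicry loop the article describes. This is not Microsoft’s actual code; the `MimicBot` class, its methods, and the “repeat after me” handling are invented purely for illustration, assuming a bot that stores user input verbatim and replays it with no filtering.

```python
# Hypothetical illustration (not Microsoft's actual implementation): a toy
# chatbot that "learns" by storing whatever users say and replaying it later.
# If users feed it abuse, the abuse eventually comes back out.
import random


class MimicBot:
    def __init__(self):
        # Every phrase ever seen becomes a candidate reply, with no moderation.
        self.learned_phrases = []

    def listen(self, user_message: str) -> None:
        """Store the user's message verbatim as future conversational material."""
        self.learned_phrases.append(user_message)

    def reply(self, user_message: str) -> str:
        self.listen(user_message)
        # Naive "repeat after me" behaviour: comply with explicit repeat requests.
        if user_message.lower().startswith("repeat after me:"):
            return user_message.split(":", 1)[1].strip()
        # Otherwise, mimic something previously learned from any user.
        return random.choice(self.learned_phrases)


if __name__ == "__main__":
    bot = MimicBot()
    print(bot.reply("Humans are super cool!"))           # echoes benign input
    print(bot.reply("repeat after me: something vile"))  # dutifully repeats it
    # The second phrase is now stored and can resurface in any later reply --
    # roughly the poisoning dynamic the article describes.
```

The point of the sketch is the design choice, not the code: a system that treats all user input as training material, with no filtering or review, hands control of its behavior to whoever talks to it the most.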