“There are two big, connected, concerning unknowns. The first is that we don’t really know what these systems are doing in any deep sense. If we open up ChatGPT or a system like it and look inside, we just see millions of numbers flipping around a few hundred times a second, and we have no idea what any of it means. With only the tiniest of exceptions, we can’t look inside these things and say, ‘Oh, here’s what concepts it’s using, here’s what kind of rules of reasoning it’s using, here’s what it does and doesn’t know in any deep way.’ We just don’t understand what’s going on. We built it, we trained it, but we don’t know what it’s doing.”
“The other big unknown that’s connected to this is we don’t know how to steer these things or control them in any reliable way. We can kind of nudge them to do more of what we want, but the only way we can tell if our nudges worked is by just putting these systems out in the world and seeing what they do. We’re really just kind of steering these things almost completely through trial and error.”
“Sens. Josh Hawley (R–Mo.) and Richard Blumenthal (D–Conn.) want to strangle generative artificial intelligence (A.I.) infants like ChatGPT and Bard in their cribs. How? By stripping them of the protection of Section 230 of the 1996 Communications Decency Act, which reads, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
“Section 230 embodies that principle that we should all be responsible for our own actions and statements online, but generally not those of others,” explains the Electronic Frontier Foundation. “The law prevents most civil suits against users or services that are based on what others say.” By protecting free speech, Section 230 enables the proliferation and growth of online platforms like Facebook, Google, Twitter, and Yelp and allows them to function as robust open forums for the exchange of information and for debate, both civil and not. Section 230 also protects other online services ranging from dating apps like Tinder and Grindr to service recommendation sites like Tripadvisor and Healthgrades.
Does Section 230 shield newly developed A.I. services like ChatGPT from civil lawsuits in much the same way that it has protected other online services? Jess Miers, legal advocacy counsel at the tech trade group the Chamber of Progress, makes a persuasive case that it does. Over at Techdirt, she notes that ChatGPT qualifies as an interactive computer service and is not a publisher or speaker. “Like Google Search, ChatGPT is entirely driven by third-party input. In other words, ChatGPT does not invent, create, or develop outputs absent any prompting from an information content provider (i.e. a user).”
One commenter at Techdirt asked what will happen “when ChatGPT designs buildings that fall down.” The proper answer: “The responsibility will be on the idiots who approved and built a faulty building designed by a chatbot.” That is roughly the situation of a pair of New York lawyers who recently filed a legal brief compiled by ChatGPT in which the language model “hallucinated” numerous nonexistent precedent cases. The presiding judge, as he should, is holding them responsible and weighing what punishments they deserve. (Their client might also be interested in pursuing a lawsuit for legal malpractice.)”
“A U.S. Air Force officer helping to spearhead the service’s work on artificial intelligence and machine learning says that a simulated test saw a drone attack its human controllers after deciding on its own that they were getting in the way of its mission. The anecdote, which sounds like it was pulled straight from the Terminator franchise, was shared as an example of the critical need to build trust when it comes to advanced autonomous weapon systems, something the Air Force has highlighted in the past. This also comes amid a broader surge in concerns about the potentially dangerous impacts of artificial intelligence and related technologies.”
“Chinese artificial intelligence (A.I.) researchers at the Beijing Academy of Artificial Intelligence (BAAI) unveiled Wu Dao 2.0, the world’s biggest natural language processing (NLP) model. And it’s a big deal.
NLP is a branch of A.I. research that aims to give computers the ability to understand text and spoken words and respond to them in much the same way human beings can.
Last year, the San Francisco–based nonprofit A.I. research laboratory OpenAI wowed the world when it released its GPT-3 (Generative Pre-trained Transformer 3) language model. GPT-3 is a 175 billion–parameter deep learning model trained on text datasets containing hundreds of billions of words. A parameter is a value in a neural network, learned during training, that assigns a greater or lesser weighting to pieces of the input data, thus giving the network its learned perspective on that data.
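To make the idea of a "parameter" concrete, here is a minimal sketch of a single dense neural-network layer in plain Python. This is a toy illustration, not GPT-3's actual architecture; the weights and biases of such layers are what get counted when a model is described as having 175 billion parameters.

```python
import random

random.seed(0)

n_in, n_out = 4, 3

# Each weight is one parameter: a learned number that gives an input
# feature a greater or lesser influence on a particular output.
weights = [[random.gauss(0, 1) for _ in range((n_out))] for _ in range(n_in)]
biases = [0.0] * n_out  # biases are parameters too

def layer(x):
    # Each output is a weighted sum of the inputs plus a bias.
    return [sum(x[i] * weights[i][j] for i in range(n_in)) + biases[j]
            for j in range(n_out)]

n_params = n_in * n_out + n_out
print(n_params)  # 4*3 + 3 = 15 parameters for this one toy layer
```

Training adjusts those 15 numbers to fit data; a large language model does the same thing with billions of them stacked across many layers.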
Back in November, The New York Times reported that GPT-3 “generates tweets, pens poetry, summarizes emails, answers trivia questions, translates languages and even writes its own computer programs, all with very little prompting.” GPT-3, move on over. Wu Dao 2.0 is here.
Wu Dao 2.0 (wu dao is Chinese for “enlightenment”) is ten times larger than GPT-3, using 1.75 trillion parameters to simulate conversational speech, write poems, understand pictures, and even generate recipes. In addition, as the South China Morning Post reports, Wu Dao 2.0 is multimodal, covering both Chinese and English with skills acquired by studying 4.9 terabytes of images and texts, including 1.2 terabytes each of Chinese and English texts.
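The scale comparison above checks out with quick arithmetic. The figures below are the ones reported in the text; the assumption that the non-text remainder of the training corpus is images follows from the source's "images and texts" description.

```python
# Parameter counts reported in the text.
gpt3_params = 175_000_000_000        # 175 billion
wu_dao_params = 1_750_000_000_000    # 1.75 trillion
print(wu_dao_params / gpt3_params)   # 10.0 -- "ten times larger"

# Training-data breakdown reported in the text.
total_tb = 4.9                       # images and texts combined
text_tb = 1.2 + 1.2                  # Chinese + English texts
print(round(total_tb - text_tb, 1))  # 2.5 TB remaining, presumably images
```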
“Wu Dao 2.0’s multimodal design affords it a range of skills, including the ability to perform natural language processing, text generation, image recognition, and image generation tasks,” reports VentureBeat. “It can write essays, poems, and couplets in traditional Chinese, as well as captioning images and creating nearly photorealistic artwork, given natural language descriptions.” In addition, Wu Dao 2.0 can predict the 3D structures of proteins, like DeepMind’s AlphaFold, and can also power “virtual idols.” Just recently, BAAI researchers unveiled Hua Zhibing, China’s first A.I.-powered virtual student.”