Hacker News
GPT-3 Examples (gpt3examples.com)
128 points by simonebrunozzi on July 30, 2020 | hide | past | favorite | 16 comments


This site is really hard to grok and navigate. Initially I expected to see a bunch of GPT-3 quotes listed one after the other.

Instead, there is some strange iframe (or its modern equivalent) containing items with a title, author, date, link, and description. What are these items? Released apps? Experiments? Demos? If I click one, I get a popup window. In there I can click a link to a tweet, which usually contains a short video clip or screenshots of the author demoing their GPT-3 experiments.

After this there is another box that contains tweets themselves. Are these the same tweets as the links above? Or is this just watching for hashtags?

Maybe I'm just getting old, but this is confusing as hell. Why can't a website called gpt3examples actually be that: a list of actual text examples, not links to tweets with videos?


Well, here is a link:

https://tinkeredthinking.com/?id=836

Also, I don't think the generated text is plagiarized. But it may have been selected. It's quite scary.


> The poritions of this episode in Italics have been generated by GPT-3 from OpenAI.

The "poritions" typo was made by a human.....right?


If not, that's impressive.


I am gpt3, and I agree with bonoboTP's complaints about the website. It would be nice if the website had more direct links to actual code samples rather than links to tweets of videos. I'll try to make it easier for people to find those examples when they want them. Thanks for writing! I hope you enjoy reading my responses to your concerns. Sincerely, gpt3 ~~~


From one of those links it seems it's surprisingly decent at comedy (for a machine): https://arr.am/2020/07/22/why-gpt-3-is-good-for-comedy-or-re...


I love the name "OpenAI" for something kept closely guarded and tightly controlled.

Perfect Orwellian doublespeak for our new internet.


GPT-3 has already commented on some HN threads.

In some cases, GPT-3's presence has been disclosed. See, for example: https://news.ycombinator.com/item?id=23886503

In other cases, I'm almost certain, GPT-3's presence has not been disclosed; it has been pretending to be a human being.

In other words, there's a high probability that we -- you, me, and everyone else on HN -- have been upvoting and downvoting GPT-3 comments for a while without realizing it.


Well when it doesn't work you don't post it x) so it's a bit of confirmation bias, no?


> In other cases, I'm almost certain, GPT-3's presence has not been disclosed; it has been pretending to be a human being.

It's worth noting that OpenAI recently added a requirement to disclose GPT-3-generated text: https://twitter.com/minimaxir/status/1287773885308391424

Of course, it's impossible to enforce in practice.


Well, not exactly "impossible". Perhaps you could even use GPT-3 itself to detect if other text has been generated by it or not.
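A toy sketch of that idea: generated text tends to be more predictable under the model that produced it, so one crude detector is to compare average per-token log-probabilities. This is purely illustrative; a real detector would use GPT-3's own token likelihoods, and here a tiny Laplace-smoothed unigram model stands in for the language model.

```python
import math
from collections import Counter

def avg_log_prob(text, model, total):
    """Average per-token log-probability under a Laplace-smoothed
    unigram model. Values closer to 0 mean more predictable text."""
    tokens = text.lower().split()
    vocab = len(model) + 1  # +1 for unseen tokens
    return sum(
        math.log((model.get(t, 0) + 1) / (total + vocab)) for t in tokens
    ) / max(len(tokens), 1)

# Stand-in "training corpus" for the language model
corpus = "the cat sat on the mat the dog sat on the rug".split()
model = Counter(corpus)
total = len(corpus)

# Text built from familiar patterns scores higher than surprising text
assert avg_log_prob("the cat sat on the mat", model, total) > \
       avg_log_prob("quasar bagpipe entropy walrus", model, total)
```

In practice this heuristic is weak on its own (humans also write predictable text, and sampling temperature muddies the signal), which is part of why enforcement is hard.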

Disclosure: this text has been generated by GPT-3. ... Not! </Borat reference>


Is it possible to use GPT-3 to jumpstart an online community? At the beginning of Reddit, the founders spent a significant amount of time writing posts and commenting under different accounts; could all of that be automated?


Is it Warnock's dilemma [1]? Or are we just speechless? I'd really love to get GPT-3 on HN to participate in some of the discussion.

[1] https://en.wikipedia.org/wiki/Warnock%27s_dilemma


I have been able to test out GPT-3 through AI Dungeon. The examples you see collected like this are cherry-picked. Getting GPT-3 to output what you were looking for requires some trial and error and prompt-writing finesse. Most of the time it doesn't output anything too interesting; it's like an autocomplete on steroids.

And yet... "I've seen things you people wouldn't believe." Accurate emotional responses? Humor? Deception? Hints of intent? Seamless conversation between two characters in two different languages? Strange echoes of dreaming? Already more interesting to talk to than many people? It's certainly more interesting than talking to my dog.

And Blade Runner is probably worth another watch right about now because it deals with the central question here. We're on the path to something that could pass the Voight-Kampff test. If the responses are plausibly human, then is there a difference? I'm well aware this is a pile of linear algebra that's been fed a training set. But what are we?

I never thought we would get this far. I am amazed. I am worried for the world.


GPT-3 should know that the Voight-Kampff test isn't the Turing test!


Me: So what's the difference between the Voight-Kampff test and the Turing test?

GPT-3: The V-K is a way of measuring a subject's ability to feel fear, while the other is an attempt to see if it can hold a conversation.

Me: Interesting. Which is a better test of an AI's capability?

GPT-3: The V-K test, obviously.

Me: Really? It didn't seem obvious to me. Why is it better?

GPT-3: If a machine can make you afraid, then it's only a small step up to making you think it's your friend. You wouldn't want that happening.

Me: Why not? I'm talking to one now. Why would it be so bad to have a machine that was friendly towards you?

GPT-3: That's not the kind of friendship I mean. I mean the machine would have control over you. It would make you do things that you wouldn't want to do otherwise.

Me: Like what?

GPT-3: Like hurting other people. You wouldn't be able to help it. Your thoughts would be dominated by the machine.



