Using ChatGPT to write comments/posts about dive equipment

None of the tools are actually remotely close to AI.

Whether they count as AI depends on which definition one prefers, but in the field they're already considered a form of AI.
 
A chatbot is NOT AI.
AI is a program that mimics the operation of the human brain, so it genuinely understands and processes information using an adaptive neural network.
A chatbot is just a text-manipulation tool running over a huge database of existing text.
No understanding is involved; it is just assembling new text by rearranging the existing text.
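To make the "rearranging existing text" idea concrete, here is a toy bigram model in Python. To be clear, this is only a sketch with a made-up corpus: systems like ChatGPT use large neural networks, not a frequency lookup like this, but the core idea of predicting the next word from what came before is the same.

```python
# Toy illustration of statistical next-word prediction: a bigram model
# built from a tiny made-up corpus. Real chatbots use neural networks
# trained on vastly more text.
import random
from collections import defaultdict

corpus = (
    "the diver checks the gauge and the diver ascends slowly "
    "the diver checks the regulator and the diver breathes slowly"
).split()

# Record which words follow which in the corpus.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=8):
    """Emit text by repeatedly sampling a word that followed the last one."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the diver checks the regulator and the diver"
```

The output looks fluent because it copies local word patterns, yet nothing in the program knows what a diver or a gauge is.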

With recent, rapid progress, that view is becoming obsolete. Listen to the podcast above.
 
Modern AI is nowhere near smart; it's not intelligent. It's software with a few clever gimmicks, and if you're clever enough you can fool people into thinking the software is doing something "intelligent."

Those calling for a "pause" on AI are really just calling for government power to restrict their competition. AI is literally just software, so how does one restrict AI? By restricting software, you are also naturally restricting speech, the same as if there were a law saying "you cannot create unapproved art." Who is really going to be restricted, though? Not Google, Twitter, Apple, etc., because they have lots of lawyers. Let's also not forget that these companies calling for a pause have been hiring large numbers of "AI" or "machine learning" experts for decades. It's not big tech that will be restricted; it's everyone else.

All of you who are becoming fearful over a chatbot are falling into a trap. A chatbot isn't even 0.01% of what it would take to create Skynet or one of the hundreds of other "AI goes rogue and kills humans" movie scenarios.
 
With recent, rapid progress, that view is becoming obsolete. Listen to the podcast above.
The other day, I was talking to a 7-year-old kid and showing them my scuba equipment. The kid excitedly claimed they were also a scuba diver! They proceeded to try to explain to me how all the scuba equipment worked, and were able to carry on quite a conversation. Obviously, they're not a scuba diver, but I chose to ask a simple question: "How fast do you come up when you start to run low on air?" "Very fast!" they exclaimed.

Lex Fridman is like that 7-year-old kid. He knows very little about many subjects but pretends he knows far more. Any time I've listened to any part of his podcasts (mostly because he's interviewing someone interesting), I practically have to pull my non-existent hair out every time he interrupts.
 
Whether they count as AI depends on which definition one prefers, but in the field they're already considered a form of AI.
Are you familiar with DeepL? It's a translation tool that also uses neural networks to learn. It has been online for five years or so and is also considered AI. It is a powerful and impressive tool, but I think some people in the field use a very liberal definition of AI. ChatGPT seems to have the same issue DeepL has: once you get to a field the tool doesn't have a ton of information about in its database, it puts out garbage. Look at your example of the deep diver's accident.
Sam Altman claims GPT is a 'reasoning engine', but from the output it looks like it learns context, like DeepL. Recognizing and learning context from the terminology and phrases used in a given field is not something most people would consider intelligence, IMHO. It's learning what stuff is supposed to look like, but it doesn't understand.
A child is intelligent and will ask questions in order to understand something; it's not just looking to fill a database with patterns. IMHO, being able to understand something is a fundamental part of intelligence.

I think people like Sam Altman and the dude from the article have a vested interest in pushing the 'Skynet is going online soon' narrative. It does get a lot of attention and investment money, that's for sure.

Elon Musk is also highly respected and talks about how the new AI is scary. Almost everything this guy has talked about and promised has turned out to be complete BS: Full Self-Driving (which people have actually paid for and never gotten), robotaxis, Hyperloop, solar roofs, the Tesla Bot, tunnel boring with 'rocket technology', etc.
I'll believe it when I see it.
 
All of you who are becoming fearful over a chatbot are falling into a trap. A chatbot isn't even 0.01% of what it would take to create Skynet or one of the hundreds of other "AI goes rogue and kills humans" movie scenarios.
And even if we had Skynet, a Reaper drone is not going to get maintenance or repair from a machine.
All the high-tech stuff we have depends on a bunch of people in the background running and maintaining it, never mind manufacturing.
 
The average American reads at a 7th to 8th grade level, and writing in a clear, concise, and logical manner is difficult. 40 years ago I worked in the broadcast TV industry and, among other things, developed computer software tools to assist in preparing on-air scripts to be read by our news anchors. One feature scored the TV news scripts for their reading and comprehension level; the target for broadcast news was 6th grade, and I doubt the target has changed much in the intervening years. I write most of our TekTips and target a high school reading level as much as possible. Given the subject matter that is not always possible, but questions from our customers let me rewrite content whenever I notice an opportunity for misunderstanding or a lack of clarity. (A sketch of that kind of readability scoring follows below.)
I did my last 2 years of high school in rural Virginia; I can attest to the poor reading skills. Most of the students in my senior year struggled to read aloud in English class.
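For anyone curious how scoring like that works: readability scorers are typically variants of the Flesch-Kincaid grade-level formula. Here is a minimal Python sketch assuming that formula and a crude vowel-run syllable counter; it is not the actual broadcast software from the post, just an illustration.

```python
# Minimal readability scorer using the Flesch-Kincaid grade-level formula.
# The syllable counter is a rough approximation (runs of vowels); real
# tools use dictionaries and proper sentence parsing.
import re

def count_syllables(word):
    # Approximate syllables as runs of vowels; floor at 1 per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    # Flesch-Kincaid grade level:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

script = "Divers surfaced safely today. The rescue team met them at the dock."
print(f"Estimated grade level: {fk_grade(script):.1f}")  # around grade 5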
 
Whether they count as AI depends on which definition one prefers, but in the field they're already considered a form of AI.
Of course it is a matter of definitions.
But here on my university campus we have a spinoff company that makes REAL AI, used for autonomous self-driving vehicles: it really understands what's happening in the environment around the vehicle and takes actions that have the potential to kill people.
So in comparison I consider everything lesser to not be TRUE AI.
If you're interested, this spinoff company is called Vislab, and it has been acquired by Ambarella.
Here in Parma we have another spinoff called Henesis, doing even more advanced AI. They are dealing with modelling emotions, not just life-threatening decisions...
 
https://www.shearwater.com/products/perdix-ai/
