General
This forum is for non-technical general discussion which can include both Linux and non-Linux topics. Have fun!
I make a living trading in the stock market. Right now, stock market analysts are flooding the financial news with articles touting artificial intelligence as the next great investment opportunity. Every company that sells software, or access to its software, is labeling the latest version as AI. Elon Musk solemnly declares that AI is potentially hazardous to the future of the human race, but that otherwise it is a great investment.
What do you think? Do you think that software which is labeled as AI is a great investment? Do you think that AI may become a competitor to the human race?
Just remember the old saw – "GIGO = Garbage In, Garbage Out." What is called "AI" is a novel set of algorithms that are essentially self-tuning, but to refer to them as "intelligence" is a deliberate misnomer.
A couple of years ago, I read a thriller based on the idea of a scientist working in AI development teaming up with a hedge fund director to develop a self-learning AI for stock trading. The trouble was, it learned too well. And when they tried to switch it off, it copied itself elsewhere and went on trading and making money for itself, while doing its best to kill off its creator. It ended up owning the company!
Machines have always done some things better than humans: stronger, more accurate, faster, cheaper. That includes the steam engine and the computer. But the machine is not intelligent; it cannot invent anything and cannot be creative.
Computers learn a lot and help solve problems where humans fail.
It is not the computer that is "dangerous to man's future", but man himself.
Quote:
Just remember the old saw – "GIGO = Garbage In, Garbage Out." What is called "AI" is a novel set of algorithms that are essentially self-tuning, but to refer to them as "intelligence" is a deliberate misnomer.
I believe that there are different levels of intelligence. A nice example of this is what we see in nature.
Even a simple algorithm such as if(...) else (...) is already a form of intelligence, or at least of decision making.
Coming up with creative solutions is one aspect of human intelligence but it isn't absolutely necessary to create such a machine.
The future is going to have robots that drive cars for you, sort of like the Johnny Cab we see in Total Recall.
Some robots will be used in wars. The ability to see, understand what it is seeing, and shoot is something robots will be doing.
Usually, the goal is to assist humans in some task rather than to build a human-level intelligence like we see in Star Trek.
Is Google involved in AI?
Quote:
"Google has not fallen behind in artificial intelligence development. It is still a leading company in the field and continues to invest in and make significant contributions to AI research and development." (Feb 8, 2023)
So is IBM.
Quote:
watsonx.ai
Train, validate, tune, and deploy foundation and machine learning models with ease. General availability of watsonx.ai is expected in July.
Quote:
But the machine is not intelligent and cannot invent anything, can't be creative.
At this point in time I think you're correct. Ray Kurzweil, a notable futurist, predicts that we will have a singularity in AI by 2030. He predicts that AI will outstrip human intelligence.
I have been hearing about AI for decades. I read somewhere that the definition of AI changed; it no longer includes consciousness.
With that said, I believe the only major impact on most of us is that it will make it just about impossible to get a real person when calling a help center. This includes your medical insurance provider.
I hope here in the US (and elsewhere), they pass a law that states:
When chatting with an AI, it must clearly announce that fact.
You must still be given the option to talk to a biological human.
Yes, it is still not intelligence; it is just the usual case of the machine doing something better than people do. In this case, the machine has a much better memory and can remember many situations, so it can choose a better move. That is machine learning, not intelligence. You could call it brute force, but that doesn't sound as good.
AI will help us do a lot of things better, like organizing and establishing diagnoses; it may help us decide in some cases, and it will always be able to calculate faster and remember more data than we can.