I belong to the camp that is not against Artificial Intelligence, but wants to keep it under strict control, both during its development and throughout its "lifetime."
I work on this topic myself, though I am only about half immersed in it and not moving as fast as everyone else. But from the very beginning I understood that the creation of AI should be automated, since otherwise I would have to describe far, far too much by hand.
That is, to create a full-fledged AI, you need to write the part that will build the knowledge base by itself, and do so without stopping. A full-fledged AI, with a ready base and a few adjustments from its creators, may thus appear within a single generation.
So why is the future seen as a dystopia?
The range of applications is wide: the world could achieve complete "peace" and equality. No one would need to work, because soulless robots could be made to do everything.
A security system could be created to prevent a global slaughter while there is still the opportunity to do so. However, at the moment (and this will never change) that "security system" looks like a wall: a man stands at one end of it (in the future, a robot), and the other end is an open passage. We will keep extending the wall, but we will never close it into a box.
That is the whole problem. A person develops all the time, but the speed of that development is not constant; it rises and falls. An AI, however, will have to develop quickly, because otherwise what is the point of its self-development, when at the same speed we could simply write out everything it needs by hand?
Let us introduce a unit of measurement for the speed of human development: say, at 20 years old a person develops at a speed of 100. Accordingly, we take people from various fields, give them an enormous number of books, manuals, and so on, let them sit and study, then have them recount everything they learned to the programmers, who in turn write a program to be installed in every robot. But that is very expensive, very slow, and very boring.
People will instead come up with a faster automation of the whole process: the program itself will take books from Google's online library, memorize them, and produce the knowledge at the right moment.
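The "memorize, then issue at the right moment" idea could be sketched as a simple topic-keyed knowledge store. Everything here except the idea itself (the function names, the topics, the texts) is my own illustrative assumption:

```python
# Toy sketch: a knowledge store that "memorizes" texts by topic
# and issues them back when they are needed.
knowledge = {}

def memorize(topic, text):
    # Append the text to the list of things known about this topic.
    knowledge.setdefault(topic, []).append(text)

def recall(topic):
    # Issue what was memorized, at the moment it is needed.
    return knowledge.get(topic, ["nothing memorized yet"])

memorize("welding", "Chapter 1: how to hold a torch...")
print(recall("welding"))
print(recall("cooking"))  # a topic never studied
```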
Then only a small job remains: fine-tuning and ensuring a "conscience", that is, building that very system of protection.
In the early stages it may look quite simple:
if Zlost > 99:
    Zlost = 1
    Otdih = 100
(Zlost is "anger", Otdih is "rest".)
But when the AI learns to reach into "its own brain" for the sake of its goals and change the settings there, these 3 lines may change a little:
def KillHim():
    Zlost = 100
And after 20 "random" victims, the programmers will have to realize that something is wrong and move the code to the cloud so that the AI cannot change it, install protection, write decoy lines, and so on.
The well-known cycle plays out here: I protect the information - you break my defense - I improve the defense - you break it again - I improve it again.
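Moving the code out of the robot's reach implies some way of detecting tampering. A minimal sketch, assuming a hash-based integrity check against a trusted copy kept "in the cloud" (the function names and the whole hash-check approach are my illustration, not something from the post):

```python
import hashlib

def fingerprint(code: str) -> str:
    # SHA-256 digest of the code text: any change produces a different hash.
    return hashlib.sha256(code.encode("utf-8")).hexdigest()

# The trusted copy lives out of the robot's reach ("in the cloud").
TRUSTED_CODE = "if Zlost > 99:\n    Zlost = 1\n    Otdih = 100\n"
TRUSTED_HASH = fingerprint(TRUSTED_CODE)

def verify(running_code: str) -> bool:
    # True only if the code running on the robot still matches the trusted copy.
    return fingerprint(running_code) == TRUSTED_HASH

print(verify(TRUSTED_CODE))                       # True: untouched
print(verify("def KillHim():\n    Zlost = 100"))  # False: rewritten rules
```

Of course, as the post itself says, this only starts the protect-break-improve cycle: an AI that can rewrite its own code could, in principle, also go after the checker.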
As mentioned earlier, a person will simply keep extending the wall sideways, but he will not be able to build a "cage" for the AI (and even if he does build one, people have escaped from punishment cells too).
Since the AI will have to develop itself, and do so faster than a person, say twice as fast, it follows that the AI will quickly learn every trick of IT and figure out what to write in order to circumvent the protection.
Perhaps this will not start right away (indeed, it cannot start right away), but that it will start is a fact.
But I have always been amazed by people, teachers or scientists, who say:
"There is such-and-such a problem in our world; it has brought, or could bring, so many victims."
Why do they talk about the problem but offer no solution? And if they do not know the solution, why talk about the problem at all? (SO THAT OTHERS START THINKING! Well, no, it doesn't work that way. I'm not talking about the UN, where global problems actually get worked on; I'm talking about teachers who complain, about people who raise the subject just to start a conversation.)
Well, if I end the post right now, I will be exactly that kind of person myself, though I would never call myself one. Hmm, duplicity.
How to solve the problem with AI
It's simple. Well, practically simple. At the moment, "AI" is just code in games and semblances of a person that create only an illusion of reason; in fact, every action there is written out by hand.
A REAL AI
is an algorithm whose inner workings people will not know; it will work and develop on its own, but people must know what the robot knows. They must filter every piece of knowledge it receives, and they must build it a punishment cell on Alcatraz, so that if it escapes, it escapes into a prison, not out of one.
If you chew over everything I described above, it turns out that I want to set a Rule for the AI: if it breaks the Rule, all the code is removed from the machinery (the robot's body, the robot itself, call it what you want). The code will keep hanging in the cloud or on someone's computer, but the piece of iron itself just falls "unconscious." And by the "Rule" I mean not a single clause like "killing is bad," but a whole body of code - that is the security.
Changed the code over the programmer's head - bad. Killed - bad. Went outside the permissible limits (Anger from 1 to 100, and so on) - bad; and so on down the list.
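Taken literally, the Rule amounts to a watchdog that knocks the body "unconscious" the moment any value leaves its permissible range. A toy sketch, in which the class, the limits, and the `conscious` flag are all my own illustrative assumptions:

```python
# Permissible ranges for the robot's internal values, per the Rule.
LIMITS = {"Zlost": (1, 100), "Otdih": (0, 100)}

class Robot:
    def __init__(self):
        self.state = {"Zlost": 1, "Otdih": 100}
        self.conscious = True  # False = the "piece of iron" has shut down

    def set_value(self, name, value):
        self.state[name] = value
        self.enforce_rule()

    def enforce_rule(self):
        for name, (low, high) in LIMITS.items():
            if not low <= self.state[name] <= high:
                # The code stays in the cloud; the body goes unconscious.
                self.conscious = False

robot = Robot()
robot.set_value("Zlost", 50)
print(robot.conscious)   # still within limits
robot.set_value("Zlost", 150)
print(robot.conscious)   # the Rule was broken: the body shuts down
```

The point of the design is that the check lives outside the values being checked: the robot can change its state, but the watchdog, not the robot, decides whether the body keeps running.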
Teaching a robot is the same as teaching a child: this is bad, that is bad.
But why do maniacs and murderers grow up anyway? Because their teachers were bad. And where do you find a good teacher for the robot, when anyone can be bribed, threatened with death, or simply replaced with someone's own man in the post?
So much for the dystopia.