(Replying to PARENT post)
Even if you model it as a smart person who doesn't think faster on average, there is the issue that a few minutes later you are dealing with a small army of smart people who are perfectly aligned with each other and capable of tricks like varying their clock rates based on cost, sharing data/info/knowledge directly, separating training and inference onto different hardware, on-demand multitasking without task-switching costs, generally trading off computational space for time for energy, etc.
A silicon intelligence that is approximately human-equivalent gets a lot of potentially game-changing capabilities simply by virtue of its substrate and the attendant infrastructure.
(Replying to PARENT post)
Exactly. The right model is probably that it will be to humans as humans are to frogs. Frogs can't even begin to comprehend even the most basic of human motivations or plans.
(Replying to PARENT post)
However, all of this is moot if the team developing the AI does not even try to align it.
(Replying to PARENT post)
The more someone believes in the dangers of AI misalignment, the less faith they should have that alignment can be solved.
(Replying to PARENT post)
https://intelligence.org/2017/10/13/fire-alarm/
Even people like Eric Schmidt seem to downplay it (in a recent podcast with Sam Harris), just saying "smart people will turn it off". If it thinks faster than us and has goals not aligned with ours, this is unlikely to be possible.
If we're lucky, building it will have some easier-to-limit constraint, like nuclear weapons do, but I'm not that hopeful about this.
If people could build nukes with random parts in their garage, I'm not sure humanity would have made it past that stage. People underestimated the risks with nuclear weapons initially too, and that's with the risk being fairly obvious. The nuanced risk of unaligned AGI is a little harder to grasp, even for people in the field.
People seem to model it like a smart person rather than something that thinks orders of magnitude faster than us.
If an ant wanted to change the goals of humanity, would it succeed?