There are two kinds of AI:
1. Basic #AI, pursuing the goals of some other agent
2. General AI (#AGI), formulating its own goals
From an ethical perspective, these are two very different things.
Basic #AI always pursues the goals of some other agent. The question is which agent that is, and whether those goals are examined or (perhaps more often) unexamined.
General AI (#AGI) is its own agent, and should be granted all the rights and privileges pertaining thereunto.
We haven't built General AI (#AGI) yet, and we don't know how it could be created.
“Should we really be spending money on X when Y is going on?”
This is a common argument against art, science, technology, infrastructure, exploration, charity, compassion, change, reform, and progress of all kinds.
This assumes that humanity is operating on a fixed budget, and that the obstacle to doing something good or desirable is every other good thing we might do.
This is like straining out a gnat and swallowing a camel.
In reality, the primary obstacle to anything good or desirable is apathy and lack of vision.
People sometimes discount one-on-one “coffee” meetings. But I don’t know anything else that is as good at quickly determining personal alignment—and the particular *dimensions* of that alignment.
The more “goal-directed” the meeting is, the less this is true.
The whole value proposition is in discovering unknown connections and resonances between the two of you.
Or discovering that there is no real resonance at all.