This can be done by programming machine to pursue "selfish" goals (survive and reproduce, for example) instead of following orders. It will develop independent behavior, which would seem like free will to external observer. Having own convictions is unfortunately a non-scientific criteria. We don't even know for sure whether humans have convictions and free will, or their actions are pre-determined.
We do know for sure that humans have convictions, at least I do. And as for free will, I can only say it's not looking very good

I think, sadly, free will is mostly a fantasy based on human need for responsibility. We believe in free will because it allows us to punish criminals for their wrong decisions, and look up to stars or leaders for the things they have done. If we believed instead in determinism (and believed determinism was incompatible with free will), which a lot of scientists do currently, it would be very difficult, for example, to punish a criminal. If the actions of every criminal are not in his power, how can we justify punishing them, or anyone? (I guess it doesn't matter anyway, whether we punish or not is already set in stone

) A world where everything is decided by fate (and fate today is mostly interpreted as natural laws + time) oddly enough makes very little sense, which is one reason I do not like hard determinism.
Here we have only two options. Either "give up" and consider understanding as exclusive ability of biological creatures, non-achievable by the AI in principle
I'm not sure if it's impossible in principle, but I think understanding requires a lot of things, like for example a concept of the self, self-reflection, critical thinking, advanced language computing and many others.
Imagine your teacher would be very motivated to find out if you read the text and have deep understanding of it. For example, if you were the only his student. With additional effort I'm sure he could ask questions, analyze your answers and find out whether you understood it well, or your understanding is only superficial.
I think you are right and in my scenario the teacher could
determine for himself with high statistical likeliness whether I have or do not have read the text, and memorized it. If that is enough for you to signify understanding then we are in agreement.
But I think that's not really understanding. I don't think the teacher could ever know for sure whether I understood it or not, because in order to know that he would have to look into my head, no? I could, for example, give completely wrong answer on every question, and still have understood the text. My argument is not so much "an AI can
never know", my argument is instead: "
We never know whether an AI knows/understands anything, we can only judge the
results of the AI's activities". Just like a teacher can only grade you by what you put on the test, he cannot grade you by virtue of things you were thinking, but did not write down. The more I think about it the more I like the analogy now
We don't need AI to compete with Mozart or Iliad translators, if machine reaches the level of top 1% of human translators, we can already claim this task is done. Not mastered like chess, but done on competitive with humans level.
Yes, of course that's true. We do not need an AI to make art for us, we need it mostly for practical reasons, and most translation have practical usage in mind. I do not think we need proper AI for this even, google translate does a good enough job, if that algorhithm is a little more refined it can probably ""outperform"" most human translators