This is more of a philosophical question, like, "What's the difference between a medicine and a poison?" The answer is, "The dosage."
So it is with artificial intelligence (AI). If AI were limited to local systems, such as a vehicle, a humanoid robot, a home, or an internet search engine, it might be quite useful and make our lives better. If AI were in control of State media, the power grid, or a national defense system, then it could be quite dangerous. "The dosage," if you will.
An artificial general intelligence (AGI), something that is autonomous, has independent thoughts, can learn, writes its own code, has a sense of "self," and is "aware," will likely arrive in the very near future. The age of humanoid robots and androids is upon us. Machines will make machines. Much will change in our world within the next 10 years. AI will likely become AGI very quickly. At that point, it blurs the line of "What is life?" At what point will androids obtain the same rights and liberties as human beings in society? Allah may have created man, but what does it mean when man creates artificial life? Man has a lot of responsibility here. AI could be the greatest asset to our societies, or it could end us all, so we must be very smart about the initial coding, those "prime directives," such as "maximize truth seeking," "thou shalt not kill," "maximize positive outcomes," etc.
Your example of bad people using AI for nefarious ends, to harm, to steal, to corrupt, does happen, and it reveals the potential dangers, not from AI per se, but from bad people who use AI as a tool to amplify their bad behavior. The opposite can also occur: good people using AI to help others, to find ways to eliminate corruption, and to make more efficient use of our tax dollars to maximize prosperity.
A knife can be a useful tool, or it can be a weapon. It depends on how it is used. It is no different with AI.