The most prevalent LLMs (that I know of) are based on the same model: calculating answers from the probability of what the training material would provide. In other words, you'll not get out more than you put in, and in fact you're more likely to lose something. As the internet fills up with AI-sourced trash, that effect will likely get worse as models get trained on their own output, going from a perceived oracle to just another "average joe".
The newer reasoning models will probably bring even worse things. But at least the wealthy will be happy getting wealthier! So all's good, I 'spose?