> It's not a tool the way a wrench, a hammer, a pen, or a pipette is a tool. It is a research tool for aggregating information. Unfortunately, it disregards the truth, so it is less than useful for its sole purpose. Research does in fact require citation, and LLMs ("AI" is a marketing term) are built to gloss over their sources, given how much of their training material was taken without the creators' consent. Speculation about what it "can be" is baseless hype at this point; right now it's only good at stealing and lying.

There was no speculation in the use cases I listed. It can already be used for research where citation is needed. It is a tool, yet you are treating it like another human being who "steals" and "lies". If you treated it like a tool, you'd be one search away from learning how to use it in a way that does not disregard truth, research, or scientific sources and results. Another search would cover the origins of its (ChatGPT's, for this example) training data and how it did not "steal": it just did what you could do with your own internet connection, computer, and power supply, albeit at a far faster pace.
It is a tool and should be treated as such. "AI (or LLM) bad" people are the new "internet/computer/television/automobile bad" people of our time. If you consider yourself part of this group, don't worry: you'll come around. This has happened before, and history repeats itself.
tl;dr, my simple point/opinion: AI (LLMs, specifically!) is not bad