Hey @marsh1, good point!
I’m voting for this, although I expect there are already efforts under way to ensure the data can be sent.
I’m no expert, but one of the limiting factors I see with GPT is the amount of data you need to feed it (which translates into tokens). GPT has a context limit that won’t be sufficient to send the data ‘on the fly’. Google Bard (or whatever they call it this week) has a much higher limit.
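To give a feel for the problem, here’s a minimal back-of-envelope sketch. It assumes the common ~4 characters per token rule of thumb for English text and a 4096-token context window (the original GPT-3.5 limit); the exact numbers depend on the model and tokenizer.

```python
# Rough token estimate: assumes ~4 characters per token (a common
# rule of thumb for English text, not an exact tokenizer).
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

CONTEXT_LIMIT = 4096  # e.g. the original GPT-3.5 context window

data_dump = "x" * 50_000  # stand-in for ~50 KB of state data
needed = estimate_tokens(data_dump)
print(needed, needed > CONTEXT_LIMIT)  # 12500 True
```

So even a modest data dump blows well past the window, which is why sending everything ‘on the fly’ doesn’t work.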
One option could be to run a local LLM that you can ask comparable questions. A FR has been posted about this: Integration with LocalAI
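For anyone curious what talking to a local LLM would look like: LocalAI exposes an OpenAI-compatible API, so a request is just a chat-completions POST. This is only a sketch, the host/port (`localhost:8080`) and model name (`ggml-gpt4all-j`) are assumptions and depend entirely on your setup.

```python
import json
import urllib.request

def build_request(question: str, model: str = "ggml-gpt4all-j"):
    # Assumed local endpoint; LocalAI mirrors the OpenAI
    # /v1/chat/completions API, so the payload shape is the same.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }).encode()
    return urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_request("How much energy did I use yesterday?")
print(req.full_url)  # http://localhost:8080/v1/chat/completions
```

The nice part is that no data leaves your network, and a local model’s context limit is a hardware question rather than a vendor quota.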