So here is why ChatGPT is so disruptive.
You can basically ask it advanced scientific questions about concepts you don’t fully grasp, as long as you know how the technology has been applied in certain areas. Case in point: quantum cross validation.
I figured: I know of IBM’s Qiskit, and I know about quantum cross validation, but I’ve never used Qiskit and was unsure how I would set up the problem.
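To make the setup concrete, here is a classical k-fold cross-validation skeleton written from scratch, so it runs without Qiskit. This is a sketch of the classical analogue only: in the quantum variant, the scoring step would swap this toy distance classifier for a model built on a quantum kernel (the part Qiskit would supply), while the fold logic stays the same. The function and variable names here are my own illustration, not from any library.

```python
# Classical k-fold cross validation from scratch. The quantum version would
# replace nearest_centroid_accuracy() with a quantum-kernel-based classifier
# (e.g. one built with Qiskit); the fold bookkeeping is identical.

def k_fold_indices(n, k):
    """Split indices 0..n-1 into k roughly equal folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def nearest_centroid_accuracy(train, test):
    """Toy 1-D classifier: predict the label of the nearer class centroid."""
    groups = {}
    for x, y in train:
        groups.setdefault(y, []).append(x)
    centroids = {y: sum(xs) / len(xs) for y, xs in groups.items()}
    correct = sum(
        1 for x, y in test
        if min(centroids, key=lambda c: abs(centroids[c] - x)) == y
    )
    return correct / len(test)

def cross_validate(data, k=5):
    """Average held-out accuracy across k folds."""
    folds = k_fold_indices(len(data), k)
    scores = []
    for test_idx in folds:
        held_out = set(test_idx)
        test = [data[j] for j in test_idx]
        train = [data[j] for j in range(len(data)) if j not in held_out]
        scores.append(nearest_centroid_accuracy(train, test))
    return sum(scores) / k

# Two well-separated 1-D classes, so held-out accuracy should be perfect.
data = [(x, 0) for x in (0.0, 0.1, 0.2, 0.3, 0.4)] + \
       [(x, 1) for x in (5.0, 5.1, 5.2, 5.3, 5.4)]
print(cross_validate(data, k=5))  # → 1.0
```

The point of the skeleton is that “quantum cross validation” doesn’t change the validation loop itself, only the model being scored inside it.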
So… I described my understanding of the problem to ChatGPT.
Then I hit on the idea of refining the prompt through a feedback loop: volley the model’s inferences back into ChatGPT (essentially iterating over the inference system), asking it to rephrase, provide clarity where necessary, and make any suggested scientific corrections. (I also raised the top_p sampling parameter, which widens the pool of candidate tokens the model samples from, hoping for a higher-quality answer.) I kept feeding the refined question back into ChatGPT until “is the above information accurate, clarify where it’s not” came back as “True” and there was nothing left to clarify, and only then did I take away the code it wrote for me.
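The loop above can be sketched in a few lines. Everything here is a hypothetical illustration: `ask` stands in for a real chat-API call (with whatever sampling settings you like), and the canned replies in `make_stub` exist only so the control flow runs as written.

```python
# Sketch of the refine-until-stable loop. make_stub() fakes a chat model with
# canned replies so this runs offline; swap it for a real API client in practice.

def make_stub(replies):
    """Return a fake ask(prompt) function that yields canned replies in order."""
    it = iter(replies)
    return lambda prompt: next(it)

def refine(question, ask, max_rounds=10):
    """Feed answers back to the model until it confirms nothing needs fixing."""
    answer = ask(question)
    for _ in range(max_rounds):
        check = ask(
            f"{answer}\n\nIs the above information accurate? Clarify where it's not."
        )
        if check.strip() == "True":
            return answer  # nothing left to clarify; keep this version
        answer = check  # fold the correction back in and ask again
    return answer

# Hypothetical transcript: two corrections, then the model signs off.
ask = make_stub([
    "Mostly right, but quantum kernels estimate state overlaps, not margins.",
    "Clearer now; one fix: use a fidelity-based kernel for the SVM step.",
    "True",
])
print(refine("Here is my understanding of quantum cross validation: ...", ask))
```

The `max_rounds` cap is just a guard so a model that never answers “True” can’t loop forever.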
I have yet to test this, as I’m still working toward fine-tuning my own GPT-Neo, but this is what I’ve been hoping people understand about these systems. They have generalized over the relationships in language well enough to basically query up these results for us. The more data the model has been exposed to, the more relationships it derives, and the wider the set of questions it can respond to.