```python
>>> complete("hi", model="gpt-3.5-turbo-0125")
'Hello! How can I assist you today?'
```
If you don't have an OpenAI API key, you can use our free proxy site as the `base_url`, like this:
```python
>>> from promplate.llm.openai import ChatComplete
>>> complete = ChatComplete(base_url="https://promplate.dev", api_key="").bind(model="gpt-3.5-turbo-0125")
>>> complete("hi")
'Hello! How can I assist you today?'
```
```python
>>> from promplate.llm.openai import TextComplete
>>> complete = TextComplete(api_key="...").bind(model="gpt-3.5-turbo-instruct")
>>> complete("I am")
' just incredibly proud of the team, and their creation of a brand new ship makes'
```
And you can pass extra parameters when calling a `Complete` instance.
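Conceptually, `.bind(...)` fixes some parameters up front, while call-time keyword arguments can supplement or override them, much like `functools.partial`. Here is a minimal sketch of that behavior using a hypothetical `fake_complete` stand-in (not promplate code, no API calls):

```python
from functools import partial

def fake_complete(prompt, *, model, temperature=1.0):
    # Stand-in for an LLM call: echo the configuration instead of hitting an API
    return f"{prompt!r} -> {model} @ {temperature}"

# Binding fixes the model once...
complete = partial(fake_complete, model="gpt-3.5-turbo-0125")

# ...while per-call parameters are still supplied at call time
print(complete("hi", temperature=0.5))
```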
Prompts usually contain something dynamic: user queries, retrieved context, search results, and so on. In promplate, simply use `{{ }}` to insert dynamic data.
```python
>>> import time
>>> from promplate import Template
>>> greet = Template("Greet me. It is {{ time.asctime() }} now.")
>>> greet.render(locals())
'Greet me. It is Sun Oct  1 03:56:02 2023 now.'
```
You can run the rendered prompt with the `complete` function we created before:
```python
>>> complete(_)
'Good morning!'
```
It works! In fact, you can use any Python expression inside `{{ }}`.
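To see why arbitrary expressions are possible, here is a rough sketch of what `{{ }}` substitution boils down to. This is an illustration only, not promplate's actual implementation (a real template engine also handles control flow, caching, and more):

```python
import re
import time

def render(template: str, context: dict) -> str:
    # Evaluate each {{ ... }} span as a Python expression against the context
    evaluate = lambda match: str(eval(match.group(1), {}, context))
    return re.sub(r"\{\{(.*?)\}\}", evaluate, template)

print(render("Greet me. It is {{ time.asctime() }} now.", {"time": time}))
print(render("2 ** 10 = {{ 2 ** 10 }}", {}))
```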
Tip: you can bind partial context to a template like this:
```python
>>> import time
>>> from promplate import Template
>>> greet = Template("Greet me. It is {{ time.asctime() }} now.", {"time": time})  # of course you can use locals() here too
>>> greet.render()  # calling with no arguments is fine now
'Greet me. It is Sun Oct  1 03:56:02 2023 now.'
```
Sometimes a single prompt isn't enough to complete a task. Here are some reasons:

- Describing a complex task in a single prompt can be difficult
- Splitting a big task into small ones can reduce the total token usage
- If you need structured output, it is easier to specify the data formats separately
- We humans think more clearly after breaking a task into parts
- Breaking big tasks into subtasks can improve interpretability and reduce debugging time
- ...
In promplate, we use a `Node` to represent a single "task". You can initialize a `Node` with a string, just like initializing a `Template`:
```python
>>> from promplate import Node
>>> greet = Node("Greet me. It is {{ time.asctime() }} now.", locals())
>>> greet.render()
'Greet me. It is Sun Oct  1 04:16:04 2023 now.'
```
But a `Node` offers far more utilities than a `Template`.
For example, you can chain two nodes together simply by adding them.
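To make the idea concrete, here is a toy sketch of how `+` can compose tasks into a chain. `ToyNode` and its methods are hypothetical stand-ins, not promplate's `Node`; each step stands in for "render a prompt, then call the LLM":

```python
class ToyNode:
    """A toy 'task' node holding an ordered list of steps."""

    def __init__(self, *steps):
        self.steps = list(steps)

    def __add__(self, other):
        # a + b yields a node that runs a's steps, then b's
        return ToyNode(*self.steps, *other.steps)

    def invoke(self, context):
        # Each step reads the shared context and writes its output to __result__
        for step in self.steps:
            context["__result__"] = step(context)
        return context

greet = ToyNode(lambda ctx: f"Greet {ctx['name']}.")
shout = ToyNode(lambda ctx: ctx["__result__"].upper())

print((greet + shout).invoke({"name": "Alice"})["__result__"])  # GREET ALICE.
```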
Note that the return type of [`.invoke()`](/references/chain/#promplate.chain.node.AbstractNode.invoke "AbstractNode.invoke") is a `ChainContext`, which merges the contexts passed along the chain in the right order. `__result__` is the output of the last `Node`. It is assigned automatically during `.invoke()`, so you can access it inside a template. Outside a template, use the `.result` attribute of a `ChainContext` to get the last output.
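The relationship between `.result` and `__result__` can be sketched with a toy mapping. `ToyChainContext` is a hypothetical illustration, not promplate's actual `ChainContext`:

```python
class ToyChainContext(dict):
    """A dict whose .result property mirrors the __result__ key."""

    @property
    def result(self):
        # .result is just sugar for looking up __result__
        return self["__result__"]

context = ToyChainContext({"__result__": "Good morning!", "name": "Alice"})
print(context.result)  # same value as context["__result__"]
```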
The following three expressions should return the same string:
Congratulations! You've learned the basic paradigm of using promplate for prompt engineering.
Thanks for reading! There are still plenty of features not covered here. Learn more in the other pages. If you have any questions, please feel free to ask us on GitHub Discussions.