
Welcome to Promplate#

Promplate is a templating framework that progressively enhances your prompt engineering workflow with minimal dependencies.

pip install promplate #(1)!
  1. If you want to run the example below, you need to install openai too. You can do so with pip install promplate[openai].

Promplate runs well on Python 3.8 - 3.12 and is well tested on both CPython and PyPy.

A simple example#

Let's say I need to greet someone in a foreign language. Let's compose two simple prompts that just work.

from promplate.llm.openai import ChatComplete #(1)!
from promplate import Node

reply = Node.read("reply.j2")
translate = Node.read("translate.j2")

translate.run_config["temperature"] = 0

chain = reply + translate #(2)!

complete = ChatComplete().bind(model="gpt-3.5-turbo")
context = {"lang": "chinese"}
  1. Importing an LLM is optional. If you only use promplate as a templating engine, pip install promplate pulls in no extra dependencies at all.
  2. Chaining nodes is simply adding them together. We believe that nice debug printing is a must for development experience. So, with some magic behind the scenes, if you print(chain), you will get </reply/> + </translate/>. This is useful if you have a lot of prompt templates and always use print to debug.
Here is reply.j2:

{# import time #}

<|system|>
current time: {{ time.localtime() }}

<|user|>
Say happy new year to me in no more than 5 words.
Note that you must include the year in the message.

Note

This shows some special markup syntax in promplate:

  • Inside {# ... #} is Python code that runs in the context. In this case, we want to use time.localtime() to get the current time, so we import time in the template.
  • Tags like <|system|>, <|user|> and <|assistant|> are chat markup. The template will be formatted into a list[Message] object before being passed to the LLM.
  • Inside {{ ... }} can be any Python expression.
And here is translate.j2:

Translate the following message into `{{ lang }}`:
"""
{{ __result__ }}
"""

You may ask, what is {{ __result__ }}?

In fact, promplate will automatically inject some variables into the context. Among them, __result__ is the LLM response of the previous node.
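
By the way, you can exercise the templating layer without any LLM at all. A minimal sketch (the template text and variable name here are made up for illustration):

from promplate import Template

greet = Template("{# import time #}It is {{ time.localtime().tm_year }}. Say hi in {{ lang }}!")

print(greet.render({"lang": "chinese"}))  # e.g. "It is 2025. Say hi in chinese!"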


Then call chain.invoke({"lang": "chinese"}, complete).result to get a Chinese greeting that relates to the current time.
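
Putting the pieces together, the final call looks like this (a minimal sketch; it assumes the setup code from the example above has already run):

context = chain.invoke({"lang": "chinese"}, complete)
print(context.result)  # the Chinese greeting produced by the second node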

Why promplate?#

I am a prompt engineer who suffered from the following problems:

Problems#

Writing prompts inside scripts is not elegant#

  • There is no syntax highlighting, no auto completion, no linting, etc.
  • The indentation is ugly, or you have to bear with lots of spaces/tabs in your prompts.
  • Some characters must be escaped, like """ inside a Python string, or ` inside a JavaScript string.

So in promplate, we support writing prompts in separate files. Of course, you can still write prompts inside scripts too.

  • writing prompts in separate files
from promplate import Template

foo = Template.read("path/to/some-template.j2")  # synchronous
bar = await Template.aread("path/to/some-prompt.md")  # asynchronous

You can also fetch templates over HTTP:

from promplate import Template

foo = Template.fetch("https://your-domain.com/path/to/some-template.j2")  # synchronous
bar = await Template.afetch("https://your-domain.com/path/to/some-prompt.md")  # asynchronous

The template names will be the filenames (without their extensions).

>>> print(foo)
<Template some-template>
>>> print(bar)
<Template some-prompt>

  • writing short prompt through literals
from promplate import Template

foo = Template('Translate this into {{ lang }}: \n"""{{ text }}"""')

The template name will be the variable name.

>>> print(foo) #(1)!
<Template foo> #(2)!
  1. repr(foo) and str(foo) are slightly different. repr(foo) will output </foo/>
  2. If you print(Template("...")) so that there is no "variable name", it will simply be <Template>.

  • (new in v0.3) writing chat prompts through magic

>>> from promplate.prompt.chat import user, assistant, system
>>> user > "hello"
{'role': 'user', 'content': 'hello'}
>>> assistant > "hi"
{'role': 'assistant', 'content': 'hi'}
>>> from promplate.prompt.chat import user, assistant, system
>>> [system > "...", user @ "example_user" > "hi"]
[
    {'role': 'system', 'content': '...'},
    {'role': 'user', 'content': 'hi', 'name': 'example_user'}
]

There are also single-letter aliases:

>>> from promplate.prompt.chat import U, A, S
>>> U > "hello"
{'role': 'user', 'content': 'hello'}
>>> A > "hi"
{'role': 'assistant', 'content': 'hi'}
>>> from promplate.prompt.chat import U, A, S
>>> [S > "...", U @ "example_user" > "hi"]
[
    {'role': 'system', 'content': '...'},
    {'role': 'user', 'content': 'hi', 'name': 'example_user'}
]

Chaining prompts is somewhat difficult#

Often we need several LLM calls in one process. LCEL is LangChain's solution to this.

Ours is similar, but every unit is a promplate.Node instance. Routers are implemented in 2-3 lines inside callback functions, through raise Jump(...) statements.

Promplate Nodes are just state machines.
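
As a rough sketch of what such a router could look like (the Jump import path, the post-run hook name, and the Jump signature below are all assumptions; see the chaining docs for the exact API):

from promplate import Node
from promplate.chain.node import Jump  # assumption: the import location may differ across versions

classify = Node.read("classify.j2")
chitchat = Node.read("chitchat.j2")

@classify.post_process  # assumption: a post-run counterpart to the documented @node.pre_process
def route(context):
    if "chitchat" in context["__result__"]:  # __result__ is the injected LLM response
        raise Jump(into=chitchat)  # hypothetical signature: divert the chain to another node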

Chat templates are hard to read#

Usually you need to construct the message list manually if you are using a chat model. In promplate, you can write chat templates in separate files and render them as message lists.
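
For instance, a chat template written with the markup shown earlier renders into a message list you could equally build with the documented chat magic:

# a template file containing:
#
#   <|system|>
#   You are a helpful translator.
#   <|user|>
#   {{ text }}
#
# renders (with text filled in) to the equivalent of:
from promplate.prompt.chat import system, user

messages = [
    system > "You are a helpful translator.",
    user > "Bonjour!",
]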

Identical prompts are hard to reuse & maintain#

Promplate has a component system (in the same sense as in frontend ecosystems), which enables you to reuse prompt template fragments across different prompts.

Callbacks and output parsers are hard to bind#

In LangChain, you can bind callbacks to a variety of event types. Promplate has a similarly flexible callback system, but you can bind simple callbacks through decorators like @node.pre_process.
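
A minimal sketch of the decorator form (the callback's exact signature and return contract are assumptions; check the callbacks docs):

import time

from promplate import Node

reply = Node.read("reply.j2")

@reply.pre_process  # documented decorator: runs before the node renders its template
def inject_time(context):
    # assumption: the callback receives the context mapping and may mutate it in place
    context["now"] = time.strftime("%Y-%m-%d %H:%M")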

Features#

  • more than templating: components, chat markup
  • more than LLM: callbacks, state machines
  • developer experience: full typing, good printing ...
  • flexibility: underlying ecosystem power

Further reading#

You can read the quick-start tutorial, which gives a more detailed walk-through. If you have any questions, feel free to ask on GitHub Discussions!