
Commit 246c728
Author: Andrei Bratu
Commit message: Docstrings, small refactors
Parent: 8556149

8 files changed: +196 −166 lines


src/humanloop/client.py: 85 additions, 106 deletions
@@ -27,6 +27,10 @@
 
 
 class ExtendedEvalsClient(EvaluationsClient):
+    """
+    Provides high-level utilities for running Evaluations on the local runtime.
+    """
+
     client: BaseHumanloop
 
     def __init__(
@@ -50,7 +54,7 @@ def run(
         :param name: the name of the Evaluation to run. If it does not exist, a new Evaluation will be created under your File.
         :param dataset: the dataset to map your function over to produce the outputs required by the Evaluation.
         :param evaluators: define how judgments are provided for this Evaluation.
-        :param workers: the number of threads to process datapoints using your function concurrently.
+        :param workers: Number of concurrent threads for processing datapoints.
         :return: per Evaluator checks.
         """
         if self.client is None:
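For context, a minimal sketch of how the `run` utility documented above might be invoked. Only the parameter names (`name`, `dataset`, `evaluators`, `workers`) come from this diff; the `Humanloop` client entrypoint and the argument shapes are assumptions.

```python
import os

from humanloop import Humanloop  # assumed client entrypoint, not shown in this diff

hl = Humanloop(api_key=os.environ["HUMANLOOP_API_KEY"])

# Parameter names per the docstring above; the dataset and evaluators
# payloads are elided because their shapes are not shown in this hunk.
checks = hl.evaluations.run(
    name="My Evaluation",  # created under your File if it does not exist
    dataset=...,           # mapped over by your function to produce outputs
    evaluators=...,        # define how judgments are provided
    workers=8,             # number of concurrent threads for datapoints
)
```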
@@ -67,6 +71,10 @@ def run(
 
 
 class ExtendedPromptsClient(PromptsClient):
+    """
+    Adds a utility for populating Prompt template inputs.
+    """
+
     populate_template = staticmethod(populate_template)  # type: ignore [assignment]
 
 
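A sketch of what the new `ExtendedPromptsClient` utility enables. The exact signature of `populate_template` is not shown in this hunk, so the argument names below are assumptions; the `{{topics}}` template syntax matches examples elsewhere in this file.

```python
import os

from humanloop import Humanloop  # assumed client entrypoint

hl = Humanloop(api_key=os.environ["HUMANLOOP_API_KEY"])

# Hypothetical call: fills the {{topics}} slot in a Prompt template.
filled = hl.prompts.populate_template(
    template="You are an assistant on the following topics: {{topics}}.",
    inputs={"topics": "finance"},
)
```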

@@ -90,16 +98,14 @@ def __init__(
         opentelemetry_tracer_provider: Optional[TracerProvider] = None,
         opentelemetry_tracer: Optional[Tracer] = None,
     ):
-        """See docstring of :func:`BaseHumanloop.__init__(...)`
-
-        This class extends the base client with custom evaluation utilities
-        and decorators for declaring Files in code.
+        """
+        Extends the base client with custom evaluation utilities and
+        decorators for declaring Files in code.
 
-        The Humanloop SDK File decorators use OpenTelemetry internally. You can provide a
-        TracerProvider and a Tracer if you'd like to integrate them with your existing
-        telemetry system. Otherwise, an internal TracerProvider will be used.
-        If you provide only the `TraceProvider`, the SDK will log under a Tracer
-        named `humanloop.sdk`.
+        The Humanloop SDK File decorators use OpenTelemetry internally.
+        You can provide a TracerProvider and a Tracer to integrate
+        with your existing telemetry system. If not provided,
+        an internal TracerProvider will be used.
         """
         super().__init__(
             base_url=base_url,
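A sketch of the telemetry integration this docstring describes. The two constructor parameter names come from the hunk above; the provider setup is standard opentelemetry-sdk usage, and the `Humanloop` client class name is an assumption.

```python
import os

from opentelemetry.sdk.trace import TracerProvider
from humanloop import Humanloop  # assumed client entrypoint

# Reuse your application's TracerProvider so SDK spans land in your telemetry;
# omit both arguments to fall back to the SDK's internal provider.
provider = TracerProvider()
tracer = provider.get_tracer("my-app")  # instrumentation name is illustrative

hl = Humanloop(
    api_key=os.environ["HUMANLOOP_API_KEY"],
    opentelemetry_tracer_provider=provider,
    opentelemetry_tracer=tracer,
)
```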
@@ -116,6 +122,7 @@ def __init__(
         self.prompts = ExtendedPromptsClient(client_wrapper=self._client_wrapper)
 
         # Overload the .log method of the clients to be aware of Evaluation Context
+        # and the @flow decorator providing the trace_id
         self.prompts = overload_log(client=self.prompts)
         self.prompts = overload_call(client=self.prompts)
         self.flows = overload_log(client=self.flows)
@@ -145,22 +152,12 @@ def prompt(
         self,
         *,
         path: str,
-        template: Optional[str] = None,
     ):
-        """Decorator for declaring a [Prompt](https://humanloop.com/docs/explanation/prompts) in code.
-
-        The decorator intercepts calls to LLM provider APIs and creates
-        a new Prompt file based on the hyperparameters used in the call.
-        If a hyperparameter is specified in the `@prompt` decorator, then
-        they override any value intercepted from the LLM provider call.
-
-        If the [Prompt](https://humanloop.com/docs/explanation/prompts) already exists
-        on the specified path, a new version will be upserted when any of the above change.
-
-        Here's an example of declaring a (Prompt)[https://humanloop.com/docs/explanation/prompts] in code:
+        """Auto-instruments calls to LLM providers and creates
+        [Prompt](https://humanloop.com/docs/explanation/prompts) Logs on Humanloop from them.
 
         ```python
-        @prompt(template="You are an assistant on the following topics: {{topics}}.")
+        @prompt(path="My Prompt")
         def call_llm(messages):
             client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
             return client.chat.completions.create(
@@ -170,57 +167,63 @@ def call_llm(messages):
                 max_tokens=200,
                 messages=messages,
             ).choices[0].message.content
-        ```
-
-        This will create a [Prompt](https://humanloop.com/docs/explanation/prompts] with the following attributes:
 
-        ```python
+        Calling the function above creates a new Log on Humanloop
+        against this Prompt version:
         {
+            provider: "openai",
             model: "gpt-4o",
             endpoint: "chat",
-            template: "You are an assistant on the following topics: {{topics}}.",
-            provider: "openai",
             max_tokens: 200,
             temperature: 0.8,
             frequency_penalty: 0.5,
         }
+        ```
 
-        Every call to the decorated function will create a Log against the Prompt. For example:
-
-        ```python
-        call_llm(messages=[
-            {"role": "system", "content": "You are an assistant on the following topics: finance."}
-            {"role": "user", "content": "What can you do?"}
-        ])
+        If a different model, endpoint, or hyperparameter is used, a new
+        Prompt version is created. For example:
         ```
+        @humanloop_client.prompt(path="My Prompt")
+        def call_llm(messages):
+            client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
+            client.chat.completions.create(
+                model="gpt-4o-mini",
+                temperature=0.5,
+            ).choices[0].message.content
 
-        The Prompt Log will be created with the following inputs:
-        ```python
+            client = Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))
+            client.messages.create(
+                model="claude-3-5-sonnet-20240620",
+                temperature=0.5,
+            ).content
+
+        Calling this function will create two versions of the same Prompt:
         {
-            "inputs": {
-                "topics": "finance"
-            },
-            messages: [
-                {"role": "system", "content": "You are an assistant on the following topics: finance."}
-                {"role": "user", "content": "What can you do?"}
-            ]
-            "output": "Hello, I'm an assistant that can help you with anything related to finance."
+            provider: "openai",
+            model: "gpt-4o-mini",
+            endpoint: "chat",
+            max_tokens: 200,
+            temperature: 0.5,
+            frequency_penalty: 0.5,
         }
-        ```
 
-        The decorated function should return a string or the output should be JSON serializable. If
-        the output cannot be serialized, TypeError will be raised.
+        {
+            provider: "anthropic",
+            model: "claude-3-5-sonnet-20240620",
+            endpoint: "messages",
+            temperature: 0.5,
+        }
 
-        If the function raises an exception, the log created by the function will have the output
-        field set to None and the error field set to the string representation of the exception.
+        And one Log will be added to each version of the Prompt.
+        ```
 
         :param path: The path where the Prompt is created. If not
             provided, the function name is used as the path and the File
             is created in the root of your Humanloop organization workspace.
 
         :param prompt_kernel: Attributes that define the Prompt. See `class:DecoratorPromptKernelRequestParams`
         """
-        return prompt_decorator_factory(path=path, template=template)
+        return prompt_decorator_factory(path=path)
 
     def tool(
         self,
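Pulling the new decorator signature together, a minimal end-to-end sketch. The `path` argument and the intercepted hyperparameters mirror the docstring above; the client construction and message payload are illustrative assumptions.

```python
import os

from humanloop import Humanloop  # assumed client entrypoint
from openai import OpenAI

hl = Humanloop(api_key=os.environ["HUMANLOOP_API_KEY"])

@hl.prompt(path="My Prompt")  # the template kwarg is removed by this commit
def call_llm(messages):
    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    return client.chat.completions.create(
        model="gpt-4o",
        temperature=0.8,
        frequency_penalty=0.5,
        max_tokens=200,
        messages=messages,
    ).choices[0].message.content

# Each call is intercepted and logged against the "My Prompt" File;
# changing the model or hyperparameters upserts a new Prompt version.
call_llm([{"role": "user", "content": "What can you do?"}])
```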
@@ -229,27 +232,22 @@ def tool(
         attributes: Optional[dict[str, Any]] = None,
         setup_values: Optional[dict[str, Any]] = None,
     ):
-        """Decorator for declaring a [Tool](https://humanloop.com/docs/explanation/tools) in code.
+        """Manage [Tool](https://humanloop.com/docs/explanation/tools) Files through code.
 
-        The decorator inspects the wrapped function's source code, name,
-        argument type hints and docstring to infer the values that define
-        the [Tool](https://humanloop.com/docs/explanation/tools).
+        The decorator inspects the wrapped function's source code to infer the Tool's
+        JSON Schema. If the function declaration changes, a new Tool version
+        is upserted with an updated JSON Schema.
 
-        If the [Tool](https://humanloop.com/docs/explanation/tools) already exists
-        on the specified path, a new version will be upserted when any of the
-        above change.
-
-        Here's an example of declaring a [Tool](https://humanloop.com/docs/explanation/tools) in code:
+        For example:
 
         ```python
-        @tool
+        # Adding @tool on this function
+        @humanloop_client.tool(path="calculator")
         def calculator(a: int, b: Optional[int]) -> int:
             \"\"\"Add two numbers together.\"\"\"
             return a + b
-        ```
 
-        This will create a [Tool](https://humanloop.com/docs/explanation/tools) with the following attributes:
-        ```python
+        # Creates a Tool with this JSON Schema:
         {
             strict: True,
             function: {
@@ -267,35 +265,16 @@ def calculator(a: int, b: Optional[int]) -> int:
         }
         ```
 
-        Every call to the decorated function will create a Log against the Tool. For example:
+        The return value of the decorated function must be JSON serializable.
 
-        ```python
-        calculator(a=1, b=2)
-        ```
+        If the function raises an exception, the created Log will have `output`
+        set to null, and the `error` field populated.
 
-        Will create the following Log:
+        :param path: The path of the File in the Humanloop workspace.
 
-        ```python
-        {
-            "inputs": {
-                a: 1,
-                b: 2
-            },
-            "output": 3
-        }
-        ```
+        :param setup_values: Values needed to set up the Tool, defined in [JSON Schema](https://json-schema.org/)
 
-        The decorated function should return a string or the output should be JSON serializable. If
-        the output cannot be serialized, TypeError will be raised.
-
-        If the function raises an exception, the log created by the function will have the output
-        field set to None and the error field set to the string representation of the exception.
-
-        :param path: The path to the Tool. If not provided, the function name
-        will be used as the path and the File will be created in the root
-        of your organization's workspace.
-
-        :param tool_kernel: Attributes that define the Tool. See `class:ToolKernelRequestParams`
+        :param attributes: Additional fields to describe the Tool. Helpful to separate Tool versions from each other with details on how they were created or used.
         """
         return tool_decorator_factory(
             opentelemetry_tracer=self._opentelemetry_tracer,
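The calculator example from the docstring, expanded into a runnable sketch. The decorator arguments come from the hunk above; the client construction is an assumption, and the Optional argument is guarded so the example returns a valid int.

```python
import os
from typing import Optional

from humanloop import Humanloop  # assumed client entrypoint

hl = Humanloop(api_key=os.environ["HUMANLOOP_API_KEY"])

@hl.tool(path="calculator")
def calculator(a: int, b: Optional[int]) -> int:
    """Add two numbers together."""
    return a + (b or 0)  # keep the return value JSON serializable

# The signature and docstring above are inspected to infer the Tool's
# JSON Schema; each call creates a Log against the Tool File.
result = calculator(a=1, b=2)
```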
@@ -310,13 +289,13 @@ def flow(
         path: str,
         attributes: Optional[dict[str, Any]] = None,
     ):
-        """Decorator for declaring a [Flow](https://humanloop.com/docs/explanation/flows) in code.
+        """Trace SDK logging calls through [Flows](https://humanloop.com/docs/explanation/flows).
+
+        Use it as the entrypoint of your LLM feature. Logging calls like `prompts.call(...)`,
+        `tools.call(...)`, or other Humanloop decorators will be automatically added to the trace.
 
-        A [Flow](https://humanloop.com/docs/explanation/flows) wrapped callable should
-        be used as the entrypoint of your LLM feature. Call other functions wrapped with
-        Humanloop decorators to create a trace of Logs on Humanloop.
+        For example:
 
-        Here's an example of declaring a [Flow](https://humanloop.com/docs/explanation/flows) in code:
         ```python
         @prompt(template="You are an assistant on the following topics: {{topics}}.")
         def call_llm(messages):
@@ -330,7 +309,7 @@ def call_llm(messages):
             ).choices[0].message.content
 
         @flow(attributes={"version": "v1"})
-        def entrypoint():
+        def agent():
             while True:
                 messages = []
                 user_input = input("You: ")
@@ -342,23 +321,23 @@ def entrypoint():
             print(f"Assistant: {response}")
         ```
 
-        In this example, the Flow instruments a conversational agent where the
-        Prompt defined in `call_llm` is called multiple times in a loop. Calling
-        `entrypoint` will create a Flow Trace under which multiple Prompt Logs
-        will be nested, allowing you to track the whole conversation session
-        between the user and the assistant.
+        Each call to `agent` will create a trace corresponding to the conversation
+        session. Multiple Prompt Logs will be created as the LLM is called. They
+        will be added to the trace, allowing you to see the whole conversation
+        in the UI.
 
-        The decorated function should return a string or the output should be JSON serializable. If
-        the output cannot be serialized, TypeError will be raised.
+        If the function returns a ChatMessage-like object, the Log will
+        populate the `output_message` field. Otherwise, it will serialize
+        the return value and populate the `output` field.
 
-        If the function raises an exception, the log created by the function will have the output
-        field set to None and the error field set to the string representation of the exception.
+        If an exception is raised, the output fields will be set to None
+        and the error message will be set in the Log's `error` field.
 
         :param path: The path to the Flow. If not provided, the function name
             will be used as the path and the File will be created in the root
             of your organization workspace.
 
-        :param flow_kernel: Attributes that define the Flow. See `class:ToolKernelRequestParams`
+        :param attributes: Additional fields to describe the Flow. Helpful to separate Flow versions from each other with details on how they were created or used.
         """
         return flow_decorator_factory(
             client=self,
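Finally, a condensed sketch of the tracing behavior the new Flow docstring describes: a single-turn variant of the docstring's agent loop, with the client construction assumed as in the sketches above.

```python
import os

from humanloop import Humanloop  # assumed client entrypoint
from openai import OpenAI

hl = Humanloop(api_key=os.environ["HUMANLOOP_API_KEY"])

@hl.prompt(path="My Prompt")
def call_llm(messages):
    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    return client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
    ).choices[0].message.content

@hl.flow(path="My Agent", attributes={"version": "v1"})
def agent(user_input: str) -> str:
    # The Prompt Log created inside is nested under this Flow's trace.
    return call_llm([{"role": "user", "content": user_input}])

print(agent("What can you do?"))
```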
