Commit bece924

Release 0.8.21
1 parent c4db3ef commit bece924

File tree

6 files changed: +725 −350 lines

poetry.lock

Lines changed: 266 additions & 327 deletions (generated file; diff not rendered)

pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "humanloop"
-version = "0.8.20"
+version = "0.8.21"
 description = ""
 readme = "README.md"
 authors = []

reference.md

Lines changed: 318 additions & 7 deletions
@@ -601,6 +601,321 @@ Controls how the model uses tools. The following options are supported:
 </dl>
 
 
+</dd>
+</dl>
+</details>
+
+<details><summary><code>client.prompts.<a href="src/humanloop/prompts/client.py">call_stream</a>(...)</code></summary>
+<dl>
+<dd>
+
+#### 📝 Description
+
+<dl>
+<dd>
+
+<dl>
+<dd>
+
+Call a Prompt.
+
+Calling a Prompt calls the model provider before logging
+the request, responses and metadata to Humanloop.
+
+You can use query parameters `version_id`, or `environment`, to target
+an existing version of the Prompt. Otherwise the default deployed version will be chosen.
+
+Instead of targeting an existing version explicitly, you can instead pass in
+Prompt details in the request body. In this case, we will check if the details correspond
+to an existing version of the Prompt. If they do not, we will create a new version. This is helpful
+in the case where you are storing or deriving your Prompt details in code.
+</dd>
+</dl>
+</dd>
+</dl>
+
+#### 🔌 Usage
+
+<dl>
+<dd>
+
+<dl>
+<dd>
+
+```python
+import datetime
+
+from humanloop import Humanloop
+
+client = Humanloop(
+    api_key="YOUR_API_KEY",
+)
+response = client.prompts.call_stream(
+    version_id="string",
+    environment="string",
+    path="string",
+    id="string",
+    messages=[
+        {
+            "content": "string",
+            "name": "string",
+            "tool_call_id": "string",
+            "role": "user",
+            "tool_calls": [
+                {
+                    "id": "string",
+                    "type": "function",
+                    "function": {"name": "string"},
+                }
+            ],
+        }
+    ],
+    prompt={"model": "string"},
+    inputs={"string": {"key": "value"}},
+    source="string",
+    metadata={"string": {"key": "value"}},
+    start_time=datetime.datetime.fromisoformat(
+        "2024-01-15 09:30:00+00:00",
+    ),
+    end_time=datetime.datetime.fromisoformat(
+        "2024-01-15 09:30:00+00:00",
+    ),
+    source_datapoint_id="string",
+    trace_parent_id="string",
+    user="string",
+    prompts_call_stream_request_environment="string",
+    save=True,
+    log_id="string",
+    provider_api_keys={
+        "openai": "string",
+        "ai_21": "string",
+        "mock": "string",
+        "anthropic": "string",
+        "bedrock": "string",
+        "cohere": "string",
+        "openai_azure": "string",
+        "openai_azure_endpoint": "string",
+    },
+    num_samples=1,
+    return_inputs=True,
+    logprobs=1,
+    suffix="string",
+)
+for chunk in response:
+    yield chunk
+
+```
+</dd>
+</dl>
+</dd>
+</dl>
+
+#### ⚙️ Parameters
+
+<dl>
+<dd>
+
+<dl>
+<dd>
+
+**version_id:** `typing.Optional[str]` — A specific Version ID of the Prompt to log to.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**environment:** `typing.Optional[str]` — Name of the Environment identifying a deployed version to log to.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**path:** `typing.Optional[str]` — Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: `folder/name` or just `name`.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**id:** `typing.Optional[str]` — ID for an existing Prompt.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**messages:** `typing.Optional[typing.Sequence[ChatMessageParams]]` — The messages passed to the provider chat endpoint.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**tool_choice:** `typing.Optional[PromptsCallStreamRequestToolChoiceParams]`
+
+Controls how the model uses tools. The following options are supported:
+- `'none'` means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
+- `'auto'` means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
+- `'required'` means the model must call one or more of the provided tools.
+- `{'type': 'function', 'function': {'name': <TOOL_NAME>}}` forces the model to use the named function.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**prompt:** `typing.Optional[PromptKernelRequestParams]` — Details of your Prompt. A new Prompt version will be created if the provided details are new.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**inputs:** `typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]` — The inputs passed to the prompt template.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**source:** `typing.Optional[str]` — Identifies where the model was called from.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**metadata:** `typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]` — Any additional metadata to record.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**start_time:** `typing.Optional[dt.datetime]` — When the logged event started.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**end_time:** `typing.Optional[dt.datetime]` — When the logged event ended.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**source_datapoint_id:** `typing.Optional[str]` — Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**trace_parent_id:** `typing.Optional[str]` — The ID of the parent Log to nest this Log under in a Trace.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**user:** `typing.Optional[str]` — End-user ID related to the Log.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**prompts_call_stream_request_environment:** `typing.Optional[str]` — The name of the Environment the Log is associated to.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**save:** `typing.Optional[bool]` — Whether the request/response payloads will be stored on Humanloop.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**log_id:** `typing.Optional[str]` — This will identify a Log. If you don't provide a Log ID, Humanloop will generate one for you.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**provider_api_keys:** `typing.Optional[ProviderApiKeysParams]` — API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**num_samples:** `typing.Optional[int]` — The number of generations.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**return_inputs:** `typing.Optional[bool]` — Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**logprobs:** `typing.Optional[int]` — Include the log probabilities of the top n tokens in the provider_response.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**suffix:** `typing.Optional[str]` — The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
+
+</dd>
+</dl>
+
+<dl>
+<dd>
+
+**request_options:** `typing.Optional[RequestOptions]` — Request-specific configuration.
+
+</dd>
+</dl>
+</dd>
+</dl>
+
+
 </dd>
 </dl>
 </details>
@@ -1037,12 +1352,6 @@ client.prompts.upsert(
     provider="openai",
     max_tokens=-1,
     temperature=0.7,
-    top_p=1.0,
-    presence_penalty=0.0,
-    frequency_penalty=0.0,
-    other={},
-    tools=[],
-    linked_tools=[],
    commit_message="Initial commit",
 )
 

@@ -9603,7 +9912,9 @@ from humanloop import Humanloop
 client = Humanloop(
     api_key="YOUR_API_KEY",
 )
-client.logs.delete()
+client.logs.delete(
+    id="string",
+)
 
 ```
 </dd>

src/humanloop/core/client_wrapper.py

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ def get_headers(self) -> typing.Dict[str, str]:
         headers: typing.Dict[str, str] = {
             "X-Fern-Language": "Python",
             "X-Fern-SDK-Name": "humanloop",
-            "X-Fern-SDK-Version": "0.8.20",
+            "X-Fern-SDK-Version": "0.8.21",
         }
         headers["X-API-KEY"] = self.api_key
         return headers
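For context, the header dict assembled by `get_headers` after this change would look like the following; a minimal sketch with a placeholder API key (the real value comes from the client configuration):

```python
# Reconstruction of the headers the client wrapper sends after 0.8.21.
# "YOUR_API_KEY" is a placeholder, not a real credential.
import typing

api_key = "YOUR_API_KEY"
headers: typing.Dict[str, str] = {
    "X-Fern-Language": "Python",
    "X-Fern-SDK-Name": "humanloop",
    "X-Fern-SDK-Version": "0.8.21",
}
headers["X-API-KEY"] = api_key
print(headers["X-Fern-SDK-Version"])  # → 0.8.21
```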

src/humanloop/logs/client.py

Lines changed: 6 additions & 2 deletions
@@ -196,7 +196,9 @@ def delete(
         client = Humanloop(
             api_key="YOUR_API_KEY",
         )
-        client.logs.delete()
+        client.logs.delete(
+            id="string",
+        )
         """
         _response = self._client_wrapper.httpx_client.request(
             "logs",
@@ -472,7 +474,9 @@ async def delete(
 
 
         async def main() -> None:
-            await client.logs.delete()
+            await client.logs.delete(
+                id="string",
+            )
 
 
         asyncio.run(main())
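The docstring change above updates the example to pass an explicit `id` to `logs.delete`. A hypothetical stub (not the real client, which issues an HTTP request to the `logs` endpoint) illustrating the documented call shape:

```python
# Hypothetical stand-in for client.logs that records calls instead of
# hitting the Humanloop API; it only mirrors the documented call shape.
import typing


class LogsStub:
    def __init__(self) -> None:
        self.deleted: typing.List[typing.Optional[str]] = []

    def delete(self, id: typing.Optional[str] = None) -> None:
        # The real SDK forwards the id to the "logs" endpoint; here we
        # just record it so the call can be inspected.
        self.deleted.append(id)


logs = LogsStub()
logs.delete(id="string")  # the updated example now passes an explicit id
print(logs.deleted)  # → ['string']
```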
