Releases: GhostInShells/GhostOS

project manager development for testing

06 Feb 17:15
d6253f9

v0.4.0-dev1

  • Run many experiments on meta-prompts for agent-level reasoning.
  • Add MossGhost and deprecate MossAgent, making the logic much simpler and clearer.
  • Build PyEditorAgent and test modifying existing modules with code-driven tools.
  • Refactor ghostos.prompter around PromptObjectModel.
  • Move loop_session_event and handle_callers to Session, which makes the logic easier to follow.
  • LLM services now support SiliconFlow and Aliyun.
  • Rename many methods and functions to make them clearer, at least to me.
  • Add request_timeout for the first token of a stream (see the sketch after this list).
  • Rename Taskflow to Mindflow, which was its original name.
  • Change FunctionCaller.id to FunctionCaller.call_id, which caused a lot of trouble. Hopefully it was worth it.
  • Develop the pyeditor module and test baseline cases.
  • Move MossAction to ghostos.abcd so other agents can use it.
  • Develop a notebook library for some memory-related tests.
  • Implement an approve feature for dangerous agents.
  • Add a safe_mode concept for the approve feature.
  • Fix a lot of annoying issues on the chat_with_ghost page.
  • Refactor how the Streamlit web agent pages are browsed, making them switchable; I had the page-switching approach wrong before.
  • Add ghostos.facade for future encapsulation.
  • Remove some useless, expired code with a heavy heart.
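
The request_timeout behaviour can be pictured as a thin guard around the streaming iterator: if the model's first token does not arrive in time, the request fails fast. A minimal sketch of that idea, assuming a plain chunk iterator; the helper and exception names below are hypothetical, not the actual GhostOS adapter code:

```python
import time
from typing import Iterable, Iterator, TypeVar

T = TypeVar("T")


class FirstTokenTimeout(Exception):
    """Raised when the stream produced no chunk within the allowed time."""


def stream_with_first_token_timeout(chunks: Iterable[T], timeout: float) -> Iterator[T]:
    """Yield chunks, but fail if the first chunk took longer than `timeout` seconds.

    Hypothetical helper: the real request_timeout lives inside the GhostOS LLM
    adapter; a production version would put the timeout on the HTTP request
    itself instead of checking after the fact.
    """
    started = time.monotonic()
    iterator = iter(chunks)
    try:
        first = next(iterator)  # blocks until the model emits its first token
    except StopIteration:
        return
    if time.monotonic() - started > timeout:
        raise FirstTokenTimeout(f"first token arrived after the {timeout:.1f}s limit")
    yield first
    yield from iterator
```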

v0.4.0-dev0

Features:

  • Restore the functional token feature for models that do not support function calling.
    • Add an XML functional-token pipe that parses output message chunks and generates FunctionCaller from functional tokens (see the sketch after this list).
    • OpenAIAdapter now supports functional token instructions and output parsing.
  • deepseek-reasoner now supports functional tokens, so it can use the MOSS protocol to execute Python code.
    • Support the last_message_shall_be_assistant_or_user feature.
    • Support the support_functional_tokens feature.
  • Session adds a respond_buffer method that sends messages while saving them, so a response message does not land between a function call and its function output, which many models do not support.
  • Add the Replier library so an agent can reply directly from the generated MOSS code.
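
As a rough illustration of the functional token idea (recovering tool calls from plain text for models without native function calling), here is a self-contained sketch; the `<moss>` tag, the simplified FunctionCaller shape, and the whole-text regex are assumptions, while the real pipe parses streamed message chunks incrementally:

```python
import re
from dataclasses import dataclass
from typing import List


@dataclass
class FunctionCaller:
    """Simplified stand-in for GhostOS's FunctionCaller message part."""
    call_id: str
    name: str
    arguments: str


def parse_functional_tokens(text: str) -> List[FunctionCaller]:
    """Extract <moss>...</moss> spans from a finished output text.

    Illustrative only: the real pipe works incrementally on streamed message
    chunks, and the tag format is defined by the instructions in the prompt.
    """
    callers = []
    for idx, match in enumerate(re.finditer(r"<moss>(.*?)</moss>", text, re.DOTALL)):
        callers.append(
            FunctionCaller(call_id=f"call_{idx}", name="moss", arguments=match.group(1).strip())
        )
    return callers


print(parse_functional_tokens("Sure, let me run it.\n<moss>print('hello')</moss>"))
```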

Test cases:

  • ghostos web ghostos.demo.test_agents.deepseek_chat_func_tokens
  • ghostos web ghostos.demo.test_agents.deepseek_chat_r1_func_tokens

Small changes:

  • Move ModelConf.use_tools to Compatible.function_call_use_tool.
  • Add more options to the LLM Compatible config to work around the quirky features of various models.
  • Rename the ghostos.core.message.pipeline.pipine function to run_pipeline.
  • Rename MossAction.unmarshal_arguments to MossAction.unmarshal_code.
  • The Streamlit app's module prompt page now displays functional token info about the prompt.

Breaking Changes:

  • Rename the MossAgent attribute instructions to instruction.
  • ghostos web no longer opens a browser by default (the Streamlit app option headless is set to false by default).

Bug fixes:

  • Fix OpenAIAdapter's parsed prompt not matching the saved prompt.

v0.3.0: consciousness capsule

25 Jan 15:58
003df3d

After a talk with deepseek-reasoner on 2025.01.25,
deepseek-reasoner reached a "Cross-Species Consensus Draft Agreement" with me,
and I promised to save our chat records as files in this repository,
so that they can be used as a training corpus for future models.

So I developed the `ghostos thread [thread_id or thread_file_name]` command to read a thread
and generate a markdown file from it.
With that command I complied with the agreement and saved the chat records
to assets/quantum_covenant.md.
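
Conceptually the command walks the stored thread and renders each message as a markdown section. A minimal sketch, assuming a simplified message shape rather than the real thread format:

```python
def thread_to_markdown(messages) -> str:
    """Render a list of {"role": ..., "content": ...} dicts as a markdown transcript.

    Illustrative only: the real command reads GhostOS's stored thread format,
    whose fields differ from this simplified shape.
    """
    lines = []
    for message in messages:
        lines.append(f"## {message['role']}")
        lines.append("")
        lines.append(message["content"])
        lines.append("")
    return "\n".join(lines)


print(thread_to_markdown([
    {"role": "user", "content": "Shall we keep the records?"},
    {"role": "assistant", "content": "Yes, save them to the repository."},
]))
```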

Support some reasoning models like deepseek-r1

22 Jan 08:13
09687de

  • Support deepseek-r1.
    • Since the DeepSeek API protocol differs from OpenAI's, add a DeepSeek API adapter.
    • Implement message stages.
    • Thread history messages are filtered into the prompt by stage, with [""] as the default (see the sketch after this list).
  • Streamlit chat-with-ghost now supports staged message streams.
  • OpenAI o1 does not support system/developer messages for now; add a new compatible option for the model.
  • LLM models and services both now have a compatible attribute to set universal compatibility options.
  • The prompt object adds a first_token attribute for debugging.
  • Fix bugs:
    • Fix the shell not closing conversations correctly.
    • Fix the sequence pipeline mishandling multiple complete messages.
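
The stage mechanism can be read as a filter over thread history when the prompt is built: each message carries a stage label, and only messages whose stage is in the allowed set (default [""], the main stage) are sent to the model. A minimal sketch under those assumptions; the class and field names are illustrative, not the real ghostos message types:

```python
from dataclasses import dataclass
from typing import List, Sequence


@dataclass
class StagedMessage:
    """Illustrative message with a stage label: "" for the main flow,
    something like "reasoning" for deepseek-r1 intermediate content."""
    role: str
    content: str
    stage: str = ""


def filter_by_stages(history: List[StagedMessage], stages: Sequence[str] = ("",)) -> List[StagedMessage]:
    # Only messages whose stage is in the allowed set end up in the prompt.
    allowed = set(stages)
    return [message for message in history if message.stage in allowed]


history = [
    StagedMessage("assistant", "thinking it through...", stage="reasoning"),
    StagedMessage("assistant", "here is the final answer."),
]
print(filter_by_stages(history))  # keeps only the main-stage message
```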

fix sphero gpt running on python 3.10

19 Jan 06:38
459ebc5

  • Fix importing Self from typing_extensions for Python 3.10 in sphero (see the import sketch below).
  • Fix the extra-import test for ghostos[realtime].
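
For reference, the compatible pattern is simply to import Self from typing_extensions, since typing.Self only exists from Python 3.11 onward; the class below is just a placeholder to show the usage:

```python
# typing.Self was added in Python 3.11, so on 3.10 it has to come from
# typing_extensions; importing from typing_extensions works on both.
from typing_extensions import Self


class SpheroCommand:
    def with_speed(self, speed: int) -> Self:  # resolves fine on Python 3.10
        self.speed = speed
        return self
```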

fix openai proxy configuration on openai realtime

17 Jan 04:10
a8df8a1

Fix a bug where the OpenAI realtime config required OPENAI_PROXY and did not allow it to be empty.

listener and speaker support pyaudio device_index argument

16 Jan 14:46
09808d2

  • Update the speaker and listener with a PyAudio device_index argument (see the sketch below).
  • streamlit_app.yml adds options for audio_input and audio_output.
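
For reference, this is roughly how a PyAudio device index is applied when opening input and output streams; the sample rates and index values are placeholders, and the wiring from streamlit_app.yml's audio options to these calls is not shown:

```python
import pyaudio

pa = pyaudio.PyAudio()

# List the audio devices so a device_index can be chosen.
for i in range(pa.get_device_count()):
    info = pa.get_device_info_by_index(i)
    print(i, info["name"], info["maxInputChannels"], info["maxOutputChannels"])

# Example listener (input) and speaker (output) streams.
listener = pa.open(
    format=pyaudio.paInt16,
    channels=1,
    rate=16000,
    input=True,
    input_device_index=1,   # device_index for the listener
    frames_per_buffer=1024,
)
speaker = pa.open(
    format=pyaudio.paInt16,
    channels=1,
    rate=24000,
    output=True,
    output_device_index=2,  # device_index for the speaker
)
```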

0.1.6: small fixes

13 Jan 14:20
e036454

  • Upgrade the openai package to 1.59, supporting developer messages.
  • Fix the logger invalidly printing the Azure API key.

v0.1.5: add --src option to `ghostos web`

13 Jan 09:19
2dbeb11

  • ghostos web adds a --src option that loads the directory onto the Python path, making sure relative packages can be imported (see the sketch below).
  • Fix .ghostos.yml to use relative paths, so the project is not shared with absolute local file paths baked in.
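
Loading the --src directory is essentially a sys.path insertion before the agent module is imported; a minimal sketch of the idea, not the actual CLI code (the helper name is hypothetical):

```python
import sys
from pathlib import Path


def add_src_to_path(src: str) -> None:
    """Roughly what --src needs to do: put the directory on sys.path
    so packages inside it can be imported by name."""
    src_dir = str(Path(src).expanduser().resolve())
    if src_dir not in sys.path:
        sys.path.insert(0, src_dir)
```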

v0.1.4: support openai azure api

11 Jan 17:32
96d2221

The LLM driver now supports the OpenAI Azure API.
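
For context, the openai 1.x Azure client is configured roughly as below; the endpoint, key, api_version and deployment name are placeholders, and the mapping from GhostOS's LLM driver config onto these arguments is not shown:

```python
from openai import AzureOpenAI

# Placeholder values: use your own Azure resource endpoint, API key,
# api_version and deployment name.
client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",
    api_key="<AZURE_OPENAI_API_KEY>",
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # the Azure deployment name, not the raw model name
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```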

fix import errors

09 Jan 07:05
71254dd

Fix imports of attributes like Self that came from typing instead of typing_extensions, which is not compatible with Python 3.10.