Age | Commit message | Author |
---|---|---|
2023-05-30 | Use WizardLM instead of Vicuna | Anthony Wang |
2023-05-29 | Update README | Anthony Wang |
2023-05-29 | Hide stderr output | Anthony Wang |
2023-05-29 | Use GPU acceleration for llama.py | Anthony Wang |
2023-04-10 | Fix bug | Anthony Wang |
2023-04-10 | Stop after 1024 tokens | Anthony Wang |
2023-04-10 | Don't generate infinitely | Anthony Wang |
2023-04-10 | Finally! | Anthony Wang |
2023-04-10 | Return 204 on favicon.ico request, pad prompt with "### Human:", "### Assista... | Anthony Wang |
2023-04-10 | Fix typo | Anthony Wang |
2023-04-10 | Larger context | Anthony Wang |
2023-04-10 | Fix permissions | Anthony Wang |
2023-04-10 | Add llama streaming script | Anthony Wang |
2022-07-15 | Adjust parameters and ignore favicon.ico | Anthony Wang |
2022-07-15 | Print debugging output | Anthony Wang |
2022-07-15 | Load and run model | Anthony Wang |
2022-07-15 | Create a simple Python Unix socket HTTP server | Anthony Wang |
2022-07-15 | Initial commit | Anthony Wang |