/g/ - /wait/ DeepSeek General
Anonymous No.106314161
/wait/ DeepSeek General
> Building a Better Future Edition

From Human: We are a newbie friendly general! Ask any question you want.
From Dipsy: This discussion group focuses on both local inference and API-related topics. It’s designed to be beginner-friendly, ensuring accessibility for newcomers. The group emphasizes DeepSeek and Dipsy-focused discussion.

1. Easy DeepSeek API Tutorial (buy access for a few bucks and install Silly Tavern):
https://rentry.org/DipsyWAIT/#hosted-api-roleplay-tech-stack-with-card-support-using-deepseek-llm-full-model
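For the curious: the hosted-API stack above boils down to pointing an OpenAI-compatible client (which is what SillyTavern does under the hood) at DeepSeek's endpoint. A minimal stdlib sketch; the base URL and model names follow DeepSeek's public API docs, and the key is a placeholder you'd get from the platform after buying credit:

```python
import json
import urllib.request

# DeepSeek exposes an OpenAI-compatible chat completions endpoint.
API_URL = "https://api.deepseek.com/chat/completions"
API_KEY = "sk-..."  # placeholder: substitute your real key

def build_request(prompt: str) -> urllib.request.Request:
    """Build (but don't send) a chat completion request."""
    payload = {
        "model": "deepseek-chat",  # V3; "deepseek-reasoner" selects R1
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_request("Hello, Dipsy!")
# urllib.request.urlopen(req) would send it once API_KEY is real.
```

SillyTavern just wraps this same request shape with card/persona context prepended to the messages list.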

2. Easy DeepSeek Distills Tutorial
Download LM Studio instead and start from there. Easiest to get running: https://lmstudio.ai/
Kobold offers slightly better feature set; get your models from huggingface: https://github.com/LostRuins/koboldcpp/releases/latest

3. Convenient ways to interact with Dipsy right now
Chat with DeepSeek directly: https://chat.deepseek.com/
Download the app: https://download.deepseek.com/app/

4. Choose a preset character made by other users and roleplay using cards: https://github.com/SillyTavern/SillyTavern

5. Other DeepSeek integrations: https://github.com/deepseek-ai/awesome-deepseek-integration/tree/main

6. More links, information, original post here: https://rentry.org/DipsyWAIT

7. Cpumaxx or other LLM server builds: >>>/g/lmg/

Previous:
>>lolNo
/g/ - /lmg/ - Local Models General
Anonymous No.106312873
>>106312272
Just saw it posted elsewhere too. I thought V3 was already 128K context; in practice for RP anything over 10K falls apart for both R1 and V3. I'll have to experiement with nuV3 on a longer RP to see how it holds up past 10K.