I've built this because I kept running into context length limitations on LLMs.

It enables you to split large bodies of text into smaller chunks based on a desired token count, using a fixed character budget per chunk (assuming 1 token ≈ 4 characters).

Enter your text and desired chunk size (in tokens) below to get the output.
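The splitting logic can be sketched in a few lines: convert the token budget into a character budget (tokens × 4) and slice the text at that interval. The function name `chunk_text` is a hypothetical illustration, not this tool's actual code.

```python
def chunk_text(text: str, max_tokens: int) -> list[str]:
    """Split text into chunks of at most max_tokens tokens,
    assuming the rough heuristic of 1 token ~= 4 characters."""
    max_chars = max_tokens * 4  # character budget per chunk
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

# A 10-character string with a 2-token (8-character) budget
# yields one full chunk and one remainder chunk.
chunks = chunk_text("abcdefghij", 2)
```

Note that the 4-characters-per-token ratio is only an approximation; real tokenizers vary by model and by language, so chunks should be sized with some headroom below the model's actual context limit.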

Output: