It’s not out of the question that we get emergent behaviour where the model can connect non-optimally mapped tokens and still translate them correctly, yeah.
I’m confused: is the concern that the model doesn’t properly disclose when it’s using software to decode something like a hex pattern?
The concern is that the model doesn’t actually see the world in terms of distinct hexadecimal digits, but as tokens of variable size. You can see this with the tiktokenizer web app: enter some text and it will split it into the sequence of tokens the model will actually process.
It’s not impossible for the model to work it out anyway, but it is one reason this type of task tends to be a bit harder for LLMs.
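To make that concrete, here’s a minimal sketch using OpenAI’s tiktoken library (the tokenizer family the tiktokenizer web app visualizes). The encoding name and the example hex string are my own assumptions, not taken from the thread:

```python
# Minimal sketch, assuming the cl100k_base encoding and a made-up hex string.
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

hex_string = "48 65 6c 6c 6f"  # "Hello" written out as hex bytes

# Print each token the model would actually see. The splits are set by
# frequency statistics, so they generally don't line up with the
# two-character hex-byte boundaries a human reads.
for token_id in enc.encode(hex_string):
    print(token_id, repr(enc.decode([token_id])))
```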
I understand how base models tokenize language. What I’m curious about is why you’re basing your response off a horrendously screenshotted meme image of someone interacting with DeepSeek. Is your concern that DeepSeek isn’t showing the code it used to decode the hex string? Because that’s certainly a valid concern, though you can ask the model to output the code it is running. That’s definitely a transparency improvement that should be made in the UI, but it’s very clear what the model is doing under the hood.
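For what it’s worth, the code in question is trivial. Here’s a minimal sketch of the sort of snippet a model could generate and run as a tool call (the input string is a made-up example):

```python
# Minimal sketch of programmatic hex decoding, the kind of code a model
# might emit and execute. The input string is a made-up example.
hex_string = "48 65 6c 6c 6f 20 77 6f 72 6c 64"

decoded = bytes.fromhex(hex_string.replace(" ", "")).decode("utf-8")
print(decoded)  # Hello world
```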