Does writing a thoughtful skill.md file and creating a thoughtful architecture address this instead of a one-shot prompt?
A well-written skill file can be a valid way to address this.
However, I would still expect this approach to be less robust than putting the instruction directly in the prompt. In practice, the prompt tends to retain the highest priority, while a skill can sometimes be diluted by newer instructions or by the surrounding data and context.
This is likely sufficient in most situations. But when the cost of failure is high, the most reliable option remains to place the critical instruction in the latest prompt, IMHO.
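To make the idea concrete, here is a minimal sketch of what "place the critical instruction in the latest prompt" can look like when you build the message list programmatically. The messages format follows the common `{"role": ..., "content": ...}` convention used by chat APIs; the function name and the `IMPORTANT:` prefix are my own illustrative choices, not part of any SDK.

```python
def inject_critical_instruction(messages, instruction):
    """Append a critical instruction to the latest user message,
    so it sits at the very end of the context rather than in an
    earlier skill file that later turns might dilute."""
    patched = [dict(m) for m in messages]  # shallow copies; don't mutate the caller's list
    for message in reversed(patched):
        if message["role"] == "user":
            message["content"] = f'{message["content"]}\n\nIMPORTANT: {instruction}'
            break
    return patched


conversation = [
    {"role": "user", "content": "Summarize the attached report."},
    {"role": "assistant", "content": "Here is the summary..."},
    {"role": "user", "content": "Now rate its methodology."},
]

patched = inject_critical_instruction(
    conversation,
    "Base your rating only on the report itself, not on the author's reputation.",
)
```

The skill file can still hold the stable background guidance; this pattern just re-asserts the one instruction you cannot afford to lose, on every turn where it matters.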
Love this. I use Claude mostly for definitions and explanations of ratios & metrics. Haven’t done a whole lot of asking its opinions or ratings yet; just scratched the surface. This is enlightening and probably shouldn’t be surprising.
Thanks for the feedback Eric, really appreciate it.
I've tested these models quite a bit and honestly, what surprises me isn't the biases themselves; it's the quality of output these sometimes genuinely dumb systems are capable of producing. Then again, I get that same feeling with some humans, so maybe it shouldn't be that surprising after all.