Does marginalia_nu not use embedding models as part of search? I guess I assumed it would. If you have embeddings anyway, gradient-boosted decision trees on the embedding vector (e.g. CatBoost) tend to work pretty well. Fine-tuning ModernBERT works even better but probably won't meet the criteria of "really fast and runs well on CPUs". That said, the approach described in the article seems to work well enough and obviously provides extremely cheap inference.
It does not use any transformer models right now. I've experimented with BERT-adjacent methods, but not found them fast enough to be useful. Basically, whatever approach is used, it needs to do inference at ~10µs latencies to make real-time result filtering viable, or under 1ms to not add unreasonable overhead to processing-time result labeling.
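For a rough sense of why those two budgets differ, here is a back-of-envelope sketch. The 10µs and 1ms targets come from the comment above; the per-query candidate count is an assumed illustrative figure, not anything from the search engine itself.

```python
# Back-of-envelope latency budgets for classifier inference.
# The 10 µs and 1 ms targets are from the comment above; the
# candidate-result count per query is an assumed figure.

CANDIDATES_PER_QUERY = 1_000     # assumption, for illustration only
REALTIME_INFERENCE_US = 10       # ~10 µs per inference
LABELING_INFERENCE_US = 1_000    # <1 ms per inference

# Real-time filtering: inference runs on every candidate at query
# time, so the cost lands directly on query latency.
realtime_total_ms = CANDIDATES_PER_QUERY * REALTIME_INFERENCE_US / 1_000
print(f"real-time filtering adds ~{realtime_total_ms:.0f} ms per query")

# Processing-time labeling: inference runs once per document during
# crawling/indexing, so a slower model only stretches the pipeline,
# not the user-facing query path.
labeling_total_ms = CANDIDATES_PER_QUERY * LABELING_INFERENCE_US / 1_000
print(f"labeling the same set offline takes ~{labeling_total_ms:.0f} ms")
```

Under those assumptions, a 10µs model keeps query-time filtering at ~10ms of added latency, while a 1ms model is only tolerable off the query path.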
This was a very meandering project, and trying to corral it into some sort of coherent narrative was a bit of an undertaking on its own. Hopefully it makes some sense.
Hi Viktor! Really cool write-up, thanks! Uruky is already using the `nsfw` param, but set to `0` or `1`, and I see in your example this looks like a new value option (`2`) that's "better" than `1`? How "safe" is it to implement it as the value to send when someone wants SFW results?
Have you seen many examples of websites labeling themselves, perhaps using rating meta tags (<meta name="rating" ...>)? Self-labeling seems valuable in some ways, but I don't think I've seen it catch on.
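For what it's worth, checking for those self-labeling tags at crawl time is cheap. A minimal stdlib sketch; the `adult` value follows the informal convention search engines have historically recognized, not any formal spec:

```python
# Sketch: detect self-labeling via <meta name="rating" ...> tags,
# using only Python's standard library.
from html.parser import HTMLParser

class RatingMetaParser(HTMLParser):
    """Collects the content of the first <meta name="rating"> tag."""
    def __init__(self):
        super().__init__()
        self.rating = None

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        if d.get("name", "").lower() == "rating" and self.rating is None:
            self.rating = (d.get("content") or "").strip().lower()

def page_rating(html: str):
    """Return the page's self-declared rating, or None if unlabeled."""
    p = RatingMetaParser()
    p.feed(html)
    return p.rating

print(page_rating('<head><meta name="rating" content="adult"></head>'))
print(page_rating('<head><title>no rating tag</title></head>'))
```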
`1` filters 'harmful' sites per the UT1 blacklists.
`2` is `1` plus the new NSFW filter.
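Putting that together, a minimal sketch of requesting SFW results with the stricter setting. Only the `nsfw` parameter and its `0`/`1`/`2` values come from this thread; the endpoint URL and the `query` parameter name are placeholders, not the real API:

```python
# Sketch: build a search request with the stricter nsfw filter.
# The nsfw values 0/1/2 are from this thread; the base URL and
# 'query' parameter name are hypothetical placeholders.
from urllib.parse import urlencode

BASE = "https://search.example.com/search"  # hypothetical endpoint

def build_query(terms: str, nsfw: int = 2) -> str:
    assert nsfw in (0, 1, 2), "thread documents values 0, 1 and 2"
    return f"{BASE}?{urlencode({'query': terms, 'nsfw': nsfw})}"

print(build_query("knitting patterns"))
# → https://search.example.com/search?query=knitting+patterns&nsfw=2
```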
The new filter works pretty well in my assessment. It's not infallible, but it gives significantly cleaner results.
And if you do find queries it fails to sanitize, I'd love to hear about them.
So I can make sure I know what sites to stay away from, of course