Show HN: SmallDocs – Markdown without the frustrations

50 points | by FailMore 3 days ago

10 comments

  • FailMore 1 day ago
    A little update: I added privacy-focused optional shorter URLs to SDocs.

    You can read more about the implementation here: https://sdocs.dev/#sec=short-links

    Briefly:

      https://sdocs.dev/s/{short id}#k={encryption key}
                          └────┬───┘   └───────┬──────┘
                               │                │
                          sent to           never leaves
                           server           your browser
    
    
    We encrypt your document client side. The encrypted document is sent to the server with an id to save it against. The encryption key stays client side in the URL fragment. (And, probably very obviously, the encryption key is required to make the server-stored text readable again.)

    You can test this by opening your browser's developer tools, switching to the Network tab, clicking Generate next to the "Short URL" heading, and inspecting the request body. You will see a base64-encoded blob of random bytes, not your document.

  • big_toast 2 days ago
    URL data sites are always very cool to me. The offline service worker part is great.

    The analytics[1] is incredible. Thank you for sharing (and explaining)! I love this implementation.

    I'm a little confused about the privacy mention. Maybe the fragment data isn't passed, but that's not a particularly strong guarantee. The JavaScript still has access, so privacy is just a promise as far as I can tell.

    Am I misunderstanding something, and is there a stronger mechanism in browsers preserving the fragment data's isolation? Or is there some way to prove a URL is running a GitHub repo without modification?

    [1]: https://sdocs.dev/analytics

    • FailMore 2 days ago
      Thanks for the kind words re the analytics!

      You are right re privacy. It is possible to go from URL hash -> parse -> send to server (that's not what SDocs does, to be clear).

      I've been thinking about how to prove our privacy mechanism. The idea I have at the moment is to have 2+ established coding agents review the code after every merge, and to provide a signal (maybe visible in the footer) that, according to them, it is secure and the check was made after the latest merge. Maybe overkill?! Or maybe a new way to "prove" things?? If you have other ideas please let me know.

      • big_toast 2 days ago
        No, I don't have any good ideas. Just hoping someone else does, or that I'm missing something.

        I think it's in the hands of browser vendors.

        The agent review a la socket.dev probably doesn't address all the gaps. I think you're already doing about as much as you reasonably can.

        • FailMore 2 days ago
          Thanks. The question has made me wonder about the value of some sort of real-time verification service.
  • pdyc 2 days ago
    I also used the fragment technique for sharing HTML snippets, but URLs became very long, and I had to implement an optional URL shortener after users complained. Unfortunately that meant server interaction.

    https://easyanalytica.com/tools/html-playground/

    • FailMore 1 day ago
      (I left a standalone comment with the details, but:) A little update: I added privacy-focused optional shorter URLs to SDocs. You can read more about the implementation here: https://sdocs.dev/#sec=short-links

    • FailMore 2 days ago
      Really nice implementation by the way.

      Re URL length: yes, I have a feeling it could become an issue. I was wondering if a browser extension might give users the ability to have shorter URLs without losing privacy, but I haven't looked into it deeply / don't know if it would be possible. (Browser extensions are decent bridges between the local machine and the browser, so maybe some sort of decryption key could be used to allow for more compressed URLs...)

      • pdyc 2 days ago
        I doubt it would be possible. It boils down to a compression problem: compressing x amount of content into y bits. Since the content is unpredictable, it cannot be done without an intermediary to store it.
      • mystickphoenix 2 days ago
        For this use case, maybe compression and then encoding would get more data into the URL before you hit a limit (or before users complain)?

        I.e. .md -> gzip -> base64

  • moaning 2 days ago
    Markdown-style editing looks very easy and convenient
    • FailMore 2 days ago
      Thanks! One potential use case I have for it is making "branded" markdown when you need to share something with a client or the public.
  • stealthy_ 2 days ago
    Nice, I've also built something like this that we use internally. Will it reduce token consumption as well?
    • FailMore 2 days ago
      Thanks. Re token reduction: not that I'm aware of. Would you mind explaining how it might? That could be a cool feature to add.
  • Arij_Aziz 14 hours ago
    This is a neat tool. I always had to manually copy-paste long texts into Notepad and convert them into md format. Obviously I couldn't parse complex sites with lots of images or those that had weird editing. This will be useful.
    • FailMore 13 hours ago
      Thank you. If you use an AI agent, you might be able to tell it to curl the target website, extract the content into a markdown file, and then sdoc it. It might have some interesting ideas for images (using the hosted URLs or hosting them yourself somehow).
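      As a toy illustration of that extraction step, a naive regex-based converter (made up for this comment; a real one would use a proper HTML parser, which is why complex sites break):

```javascript
// Naive HTML -> markdown sketch: handles headings, links, images, paragraphs,
// then strips whatever tags remain. Illustrative only.
function htmlToMarkdown(html) {
  return html
    .replace(/<h1[^>]*>(.*?)<\/h1>/gis, '# $1\n\n')
    .replace(/<h2[^>]*>(.*?)<\/h2>/gis, '## $1\n\n')
    .replace(/<a[^>]*href="([^"]*)"[^>]*>(.*?)<\/a>/gis, '[$2]($1)')
    .replace(/<img[^>]*src="([^"]*)"[^>]*>/gis, '![]($1)') // keep hosted image URLs
    .replace(/<\/p>/gi, '\n\n')
    .replace(/<[^>]+>/g, '') // strip remaining tags
    .trim();
}

const html = '<h1>Post</h1><p>See <a href="https://example.com">this</a>.</p>';
console.log(htmlToMarkdown(html)); // # Post\n\nSee [this](https://example.com).
```

      Anything this misses (tables, nested markup, scripts) is exactly the "weird editing" that needs a smarter tool or an agent.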
  • moeadham 3 days ago
    I had not heard of URL fragments before. Is there a size cap?
    • FailMore 3 days ago
      Ish: the cap is the length of URL that the browser can handle. For desktop Chrome it's 2MB, but for mobile Safari it's 80KB.

      The compression algo SDocs uses reduces the size of your markdown file by ~10x, so 80KB of URL still holds ~800KB of markdown, which is fairly beefy.

      • tcfhgj 7 minutes ago
        It's 2^16 = 65,536 bytes for Firefox
    • vivid242 2 days ago
      Hadn't heard of it either. Very smart; it could open up lots of other privacy-friendly "client-based web" apps.
      • FailMore 2 days ago
        TYVM. Yeah, I am curious to explore moving into other file formats like CSVs.
  • pbronez 1 day ago
    Cool project. Heads up: there's a commercial company with a very similar name that might decide to hassle you about it:

    https://www.sdocs.com/

    • FailMore 1 day ago
      Thanks + thanks for the heads up. I will see what happens. It's a domain-name war out there!
  • adamsilvacons 2 days ago
    [dead]
    • fredericgalline 15 hours ago
      [dead]