• kescusay@lemmy.world · 169 points · 20 hours ago

    Long story short:

    • Some of the emails in the file dump had attachments.
    • The way attachments work in email is that the file gets converted to encoded text (base64) inside the message.
    • That encoded text was included - badly - in the file dump.
    • So it's theoretically possible to convert them back into the original files, but it will take work to get the text right. Every character has to be exactly correct (sketch below).

    Source: I’m a software developer and I’m currently trying to recover one of these attachments.
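
    Roughly, once the text is transcribed, the decoding step itself is trivial; it's the transcription that's hard. A minimal Python sketch (the base64 string here is a tiny stand-in, not a real attachment, which runs for pages):

        import base64

        # Stand-in for the transcribed attachment text. Email clients wrap
        # base64 at ~76 columns, so strip all whitespace before decoding.
        transcribed = """
        JVBERi0xLjQK
        """
        clean = "".join(transcribed.split())

        # validate=True rejects any character outside the base64 alphabet,
        # which catches some (but not all) transcription errors.
        data = base64.b64decode(clean, validate=True)

        with open("attachment.pdf", "wb") as f:
            f.write(data)

        print(data)  # b'%PDF-1.4\n' -- a real PDF starts with %PDF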

    • apftwb@lemmy.world (OP) · 65 points · 19 hours ago

      I’m a software developer and I’m currently trying to recover one of these attachments.

      🫡

    • apftwb@lemmy.world (OP) · 23 points · 18 hours ago

      Are you having as much trouble with OCR as the article's author? I would have thought OCR was a solved problem in 2026, even with a poor choice of font.

      • Taldan@lemmy.world · 3 points · 7 hours ago

        OCR is mostly good enough. The problem here is that we have 76 pages that need to be read perfectly, from a low-fidelity input.

        We also have very little in the way of error correction, since the text is mostly not human-readable: you can't spot a wrong character from context the way you could in English prose.
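
        To illustrate with a toy example (both strings are stand-ins): a misread character that's still inside the base64 alphabet decodes without any complaint, just to the wrong bytes. There's no checksum to catch it.

            import base64

            good = "JVBERi0xLjQK"  # decodes to b'%PDF-1.4\n'
            bad  = "JVBER10xLjQK"  # 'i' misread as '1' -- still valid base64

            print(base64.b64decode(good, validate=True))  # b'%PDF-1.4\n'
            print(base64.b64decode(bad, validate=True))   # b'%PDG]1.4\n', no error raised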

      • kescusay@lemmy.world · 17 points · 16 hours ago

        I'm not having trouble with it as such; it's just a slow and painstaking process. The source is crappy enough that an enormous number of characters need to be checked manually, and that's ridiculously time-consuming.

      • floofloof@lemmy.ca · 7 points · 17 hours ago

        I wonder if they have considered crowdsourcing this: having many people type in small chunks of the data by hand, doing their own character recognition. Get enough people in, with enough overlap, and the process would have some built-in error correction.
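
        Something like this could do the merging (a minimal sketch; it assumes every transcription of a chunk lines up character-for-character, so real-world insertions and deletions would need sequence alignment first):

            from collections import Counter

            def consensus(transcriptions):
                """Majority-vote each character position across several
                independent transcriptions of the same chunk."""
                return "".join(
                    Counter(chars).most_common(1)[0][0]
                    for chars in zip(*transcriptions)
                )

            # Three volunteers type the same chunk; each makes a different mistake.
            attempts = [
                "JVBERi0xLjQK",
                "JVBER10xLjQK",  # 'i' mistyped as '1'
                "JVBERi0xLjOK",  # 'Q' mistyped as 'O'
            ]
            print(consensus(attempts))  # JVBERi0xLjQK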

          • Kevlar21@piefed.social · 10 points · edited · 15 hours ago

            Not an expert at all, but I'm genuinely curious: how long would it take to check all possibilities for each I or 1? Is that across the full length of the hash or whatever? So in this example image we have 2^8 = 256 different possibilities to check? Seems like that would be easy enough for a computer.

            Edit: actually read the article. It's much more complicated than this. This isn't the only issue, and the base64 in the example was 76 pages long.
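
            For a handful of ambiguous spots it really would be that easy. A hypothetical sketch (the candidate sets are invented for illustration):

                import base64
                from itertools import product

                # One tuple of candidates per character position; ambiguous
                # positions list every glyph the OCR might have confused.
                # Here: 4 * 2 * 2 = 16 combinations.
                candidates = [
                    ("J",), ("V",), ("B",), ("E",), ("R",),
                    ("i", "1", "l", "I"),  # all four are valid base64
                    ("0", "O"),
                    ("x",), ("L",), ("j",),
                    ("Q", "O"),
                    ("K",),
                ]

                for combo in product(*candidates):
                    text = "".join(combo)
                    data = base64.b64decode(text)
                    # Cheap filter; the real test is whether the whole file
                    # decodes to a well-formed PDF.
                    if data.startswith(b"%PDF"):
                        print(text, data)

            The catch, as the edit says: with ambiguities on nearly every line of 76 pages, the combinations multiply exponentially instead of stopping at 256.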

    • trolololol@lemmy.world · 3 points · 14 hours ago

      Curious here: this is base64? And what's behind it is more often than not an image or text? And you need to do OCR to get the characters?

      Maybe for the text you could use a dictionary to rubber-stamp whether that zero is actually a letter O, etc.?

      I’m curious to know what the challenge is and what your approach is.

      • kescusay@lemmy.world · 13 points · 13 hours ago

        Yes, it’s base64. And what’s behind it could be anything that can be attached to an email.

        In this case, it’s a PDF. If the base64 text can be extracted accurately, then the PDF that was attached to the email can be recreated.

        The challenge is basically twofold:

        1. There's a lot of text, and it needs to be extracted perfectly. Even one wrong character corrupts the output; at best the decode fails outright, at worst it silently produces a broken PDF.
        2. As the article points out, there are lots of visual problems with the encoded text, including the shitty font it’s displayed with, which makes automating the extraction damn near impossible. OCR is very good these days, but this is kind of a perfect example of text that it has trouble with.

        As for my approach, I’m basically just slowly and painstakingly running several OCR tools on small bits at a time, merging the resulting outputs, and doing my best to correct mistakes manually.
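
        Once a chunk looks done, a few cheap sanity checks catch the gross errors before a full decode attempt (a sketch of the kind of thing I mean; these are only necessary conditions, since a wrong character that's still valid base64 sails straight through them):

            import base64
            import binascii

            def sanity_check(text: str) -> str:
                clean = "".join(text.split())
                if len(clean) % 4 != 0:
                    return "length not a multiple of 4: characters dropped or added"
                try:
                    data = base64.b64decode(clean, validate=True)
                except binascii.Error as e:
                    return f"not valid base64: {e}"
                if not data.startswith(b"%PDF"):
                    return "decodes, but not to a PDF header"
                if b"%%EOF" not in data[-1024:]:
                    return "no PDF trailer: truncated or corrupted"
                return "structurally plausible -- try a real PDF reader"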

        • trolololol@lemmy.world · 4 points · 11 hours ago

          Ah yes, PDF is a clusterfuck where almost anything is valid, I think, so there's minimal redundancy to catch errors.

          Text and image formats are way more lenient and are full of redundancy.