On the Free Republic forum, a poster by the name of GAFreedom has posted a good overview of how a scan can explain most of the artifacts observed in President Obama’s long-form birth certificate.
I have yet to understand how optimization renders LAYERS.
I work for a copier company. I’ve been working on the things for the past 15 years. All copiers, when they scan a document to PDF, produce 6 or more layers in the PDF output, starting with a monochrome layer with most of the text on it, and further layers with additional scanned data on top of it. Further, more artifacts occur if you are using scan-to-email instead of scan-to-folder.
The most recent statement by Mike Zullo of Sheriff Arpaio’s Cold Case Posse is that despite the replication claims, the CCP has conclusive evidence that the WH BC is forged. There is a sealed report by certified document examiner Reed Hayes that Zullo is at least partially relying on which has been submitted to a federal court in Alabama.
1) Yeah, I know all about Reed Hayes and his report. I don’t trust him and I don’t trust the report.
1A) Why don’t I trust him? He’s a certified “handwriting analyst”. Handwriting analysis, otherwise known as “graphology”, is pseudoscience that says it is possible to assess someone’s personality by analyzing their handwriting. It’s a load of crap not supported by experiments and evidence. So first, he’s certified in a bunch of hooey. Hopefully his forensic document analysis experience isn’t the same type of hooey.
1B) Why don’t I trust the report? The report is sealed. Why isn’t it public? How do we know the sealed report doesn’t say “There ain’t a dang thing wrong with this BC!”? And the answer is, we don’t. We don’t know what’s in the report or why it’s compelling evidence. It could be crapola. Until we know, we can’t say.
Please see this Free Republic thread regarding claims that Xerox copiers can replicate numerous features of the White House BC pdf: “Xerox 7655 Overview Picture (Obot claims to replicate Obama LFBC pdf w/floating signature)”
1) The White House does not use Xerox machines. They use Ricohs. Why? Because the White House, the CIA, Congress, and the Department of Defense all have Ricoh contracts for their copiers. I should know, we send techs to work on the machines all day. [NBC: GAFreedom is wrong; the government may have a contract with Ricoh, but there is at least one Xerox 7655 in use]
2) A lot of the pro-Birthers over in that thread don’t know a thing about document automation and imaging, compression protocols in scanning (especially lossy protocols)…the ignorance is astounding. Like this statement:
it is not JUST the pdf layers, it is also what is ON those layers. You have black and white layers, greyscale layers, different fonts in different layers, different pixel resolution, ALL of which prove that it was put together from OTHER documents.
Anyone who is an expert in how copiers scan knows that’s a bunch of hooey. JBIG2 compression will do different layers like that, especially when scanning a color document. Or this crap!
With that kind of WHut insider connection, it wouldn’t be too far-fetched to have Xerox or Adobe programmers “update” their software to behave in the far-fetched manner required within limited circumstances. Someone needs have archived the software extant at the time of the original scan for the ostensibly-generating machine. At this point I wouldn’t trust company repository driver software not to have been similarly corrupted and back-dated. Such machines may automatically update their software via the Internet or via periodic company service packs, obliterating earlier software in those cases.
BULL! Firmware, especially for older machines, does not update that often. While updates CAN be performed through a web monitor, they’re an absolute pain in the ass that can take up to 3 hours per machine. Updates are mostly performed by SD card inserted into the machine by a technician, as that takes approximately 30-45 seconds to update. The scan compressions are hardly ever updated, usually remaining the same since the machine was manufactured due to hardware restrictions. And this idea of corrupting and backdating software archives is pure paranoia! That would cost MILLIONS to do.
It’s such BS that it pisses me off as a professional. I stand by what I say in regards to the scanning processes.
The cause of the layering and artifacts and halos and whatnot are due to two things. 1) The anti-counterfeiting technology that is mandated in all scanning devices in accordance with mandates from the Secret Service. 2) The High Compression PDF protocol that is used in copiers for scanning, known as the “JBIG2 compression” protocol.
I can reproduce, in an instant, the same “forgery” that Zullo claims simply by walking over to one of my test machines at work and scanning a birth certificate to email. For additional hilarity, here’s a big problem with JBIG2 compression:
You are seriously going to argue that it will move the supposedly typed text around, but leave all the printed text in the exact correct position?
By what process can the algorithm distinguish between typewritten text and printed text so that it can tell which one it needs to screw up?
When the text is being scanned, the algorithm is not paying any attention to the actual text at all. It’s paying attention to the shape of the text being scanned. So it parses the shape of a letter, determines what it is, and assigns it a position in a table. This way, the algorithm creates a dictionary/glossary of what each shape is. So if the string AABBCC occurs once and is given a label xyx, the next time AABBCC occurs it is just replaced in the compressed stream by xyx. What is transmitted is the compressed string xyx; this is supposed to be re-mapped back to AABBCC at the receiving end.
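The shape-dictionary idea described above can be sketched in a few lines of Python, with single characters standing in for scanned glyph bitmaps. This is an illustrative toy, not the actual JBIG2 codec; all function names are mine:

```python
def compress(symbols):
    """Replace repeated symbol shapes with indices into a dictionary."""
    dictionary = []   # unique shapes seen so far
    encoded = []      # stream of dictionary indices
    for shape in symbols:
        if shape in dictionary:
            idx = dictionary.index(shape)   # reuse the earlier entry
        else:
            dictionary.append(shape)        # first sighting: store it
            idx = len(dictionary) - 1
        encoded.append(idx)
    return dictionary, encoded

def decompress(dictionary, encoded):
    """Map each transmitted index back to its stored shape."""
    return [dictionary[i] for i in encoded]

# "AABBCC" scanned twice: the second occurrence costs only indices.
page = list("AABBCC") + list("AABBCC")
dictionary, encoded = compress(page)
print(dictionary)                       # ['A', 'B', 'C']
print(decompress(dictionary, encoded) == page)   # True
```

With exact matching like this the round trip is lossless; the trouble described next begins when the matcher accepts shapes that are merely similar.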
The problem with JBIG2 is that it is a lossy compression and decompression protocol. It’s not a true copy; it’s an approximation of one. An error in the compression, or in an addition to the table, will result in an incorrect patch substitution. The algorithm doesn’t know about words, only “patches”, so it often breaks words into different-looking chunks. Like this:
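The lossy failure mode can be sketched by swapping exact matching for a similarity threshold: the matcher reuses any stored shape that is merely “close enough”, so two nearly identical glyphs (think of a degraded “6” and “8”) collapse into one dictionary entry. The string “bitmaps” and the 0.8 threshold below are illustrative assumptions, not the real codec’s parameters:

```python
def similarity(a, b):
    """Fraction of pixels two same-sized bitmaps (here, bit-strings) share."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def lossy_compress(symbols, threshold=0.8):
    """Like the exact-match version, but reuse any 'close enough' shape."""
    dictionary, encoded = [], []
    for shape in symbols:
        for i, known in enumerate(dictionary):
            if similarity(shape, known) >= threshold:  # the lossy step
                encoded.append(i)
                break
        else:
            dictionary.append(shape)
            encoded.append(len(dictionary) - 1)
    return dictionary, encoded

# Two 4x5 glyph bitmaps (rows flattened) that differ by one pixel,
# like a worn '6' and '8' on a scanned page:
six   = "0110" "1001" "1110" "1001" "0110"
eight = "0110" "1001" "0110" "1001" "0110"

dictionary, encoded = lossy_compress([six, eight])
decoded = [dictionary[i] for i in encoded]
print(decoded == [six, six])   # True: the '8' came back as a '6'
```

The decompressed page still looks crisp, because a clean dictionary shape was stamped into place; it is just the wrong shape, which is exactly why these errors are so hard to spot.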
And this is why JBIG2 compression schemes are being tossed out by the copier industry. Lossy compression was a godsend in the days of low bandwidth and limited HDD space, but it was never supervised properly and that means MASS amounts of documents scanned are highly incorrect – especially if they involve numbers or typed text.
In Science and Engineering, we refer to this sort of statement as “Hand Waving.” You haven’t really explained anything, you’ve just made some vague reference to an algorithm that does “Weird” things. It is not an answer to how typewritten text can be “weirded” while leaving the printed text in a completely normal condition.
If you need it spelled out for you, then, instead of verifying it yourself:
* JBIG2 is a lossy compression format
* Lossy compression can introduce errors, since it will misread numbers and typewritten text.
* It’s an industry problem that was only discovered last year.
* All copiers currently made use JBIG2.
* It’s being gotten rid of because of these problems.
* Links explain it fine, so long as you know the tech.
I cannot comprehend the idea of an algorithm that will screw up only the typewritten text, but will leave the printed text completely normal.
Typewritten text is compressed and added to the dictionary by shape. Printed text is compressed and added to the dictionary pixel by pixel, due to its irregularity. It was done that way to increase scanning efficiency. See this paper from 2001. Unfortunately, as with the difference between MP3s and FLACs, lossy versus lossless compression has its flaws.
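This two-path split is the answer to the earlier question of how only the typewritten text gets “weirded”. A toy sketch of the routing idea, assuming a deliberately simple rule (recurring shapes go to the lossy symbol dictionary, one-off shapes are coded losslessly); the real codec segments text and generic regions by more sophisticated analysis, so the rule below is purely illustrative:

```python
from collections import Counter

def route_regions(shapes):
    """Split scanned shapes into symbol-dictionary vs generic-region paths."""
    counts = Counter(shapes)
    text, generic = [], []
    for shape in shapes:
        # Toy rule: shapes that recur are treated as text glyphs and take
        # the lossy dictionary path; one-off marks (signatures, preprinted
        # form lines) are coded pixel by pixel and survive intact.
        (text if counts[shape] > 1 else generic).append(shape)
    return text, generic

page = ["e", "e", "t", "t", "signature-stroke", "form-rule"]
text, generic = route_regions(page)
print(text)      # ['e', 'e', 't', 't']  -> lossy symbol path
print(generic)   # ['signature-stroke', 'form-rule']  -> lossless path
```

Only the shapes sent down the dictionary path are exposed to substitution errors, which is why the regular typed characters can be mangled while the irregular printed and handwritten marks come through untouched.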