tyrstag wrote: First off, let me explain that I am not a skeptic. I have seen too much unexplainable stuff in my life to not be a believer. I just have a big problem with some things that are claimed to be evidence, but have perfectly reasonable scientific explanations.
I would agree; one must be very careful about what one calls proof. Evidence is just a collection of data, and that alone doesn't make it valid or invalid; it just means empirical data has been collected. The collection of that data as a whole must then lead toward proof. I personally don't think any one piece of data can constitute proof by itself, but rather an overwhelming preponderance of positive data, which collectively increases the probability of having found something valid.
You should be skeptical of ANY digital evidence. The reason is that we live in an analog world.
On this I would have to disagree. Evidence can be valid or invalid independent of whether it's digital or analog. In fact, in its raw form, much digital evidence can far surpass its analog equivalent. I do understand the concept of artifacts and loss due to compression and other digital factors, but as long as you keep those factors in mind when you select your means of collecting the data, and account for the level of error they introduce, the data can be every bit as valid.
For a moment, consider this: a regular chemical photograph taken of something evidential. The same effects (motion blur, ghosting, lack of definition, etc.) can affect an analog image just as they can a digital one. The same distortion you get by zooming in on a digital image, you get by zooming in on an analog print. This is true for video as well. And with audio evidence, recording on an old-style tape recorder can actually introduce more distortion and background noise than its digital counterpart.
Also, as far as purposeful manipulation of data goes (faking images), anyone who's ever worked in a real darkroom will tell you that it's just as easy (sometimes easier) to fake an analog image as it is a digital one. I would not discount digital forms in any way. In fact, digital images give you some internal information which can, in some cases, lend credibility to the photograph.
Digital images carry embedded metadata (EXIF data) which can tell you the source of the image, the time of day, and even the camera settings used when the image was taken. Yes, this can be faked or duplicated, but that greatly reduces the number of people who have the knowledge to do it.
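As a rough illustration, here is a minimal sketch of listing that embedded metadata with the Pillow library in Python. The file name "evidence.jpg" is just a hypothetical placeholder, and exactly which fields show up depends on the camera.

```python
# Minimal sketch: listing the EXIF metadata embedded in a digital photo (Pillow).
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("evidence.jpg")        # hypothetical file name
exif = img.getexif()                    # dict-like mapping of numeric EXIF tag IDs to values

for tag_id, value in exif.items():
    name = TAGS.get(tag_id, tag_id)     # translate the numeric tag ID into a readable name
    print(f"{name}: {value}")

# Fields such as Make, Model and DateTime typically appear at this level; more
# detailed camera settings (exposure, aperture, ISO) sit in a sub-IFD on most cameras.
```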
As long as no compression is used on the original, digital images, video, and sound can be as good as, if not better than, their analog counterparts. The true key is to know the capabilities and limitations of the equipment you are using; not all equipment is created equal. Also, if you do any processing on the data, keep the original and know how the processing works. You MUST know how any image or sound processing will affect the validity of the data. Data integrity is #1 when it comes to wanting to use it as scientific proof.
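As a purely illustrative example of that workflow, here is a minimal Python sketch, assuming the Pillow library and hypothetical file names. It fingerprints the untouched original, works only on a copy, and saves any processed result in a lossless format.

```python
# Sketch of a "keep the original, process only copies" workflow.
import hashlib
import shutil
from PIL import Image

ORIGINAL = "original.tif"   # hypothetical uncompressed file straight from the camera

# 1. Fingerprint the untouched original so any later alteration can be detected.
with open(ORIGINAL, "rb") as f:
    print("SHA-256 of original:", hashlib.sha256(f.read()).hexdigest())

# 2. Never edit the original in place; make a working copy and process only that.
shutil.copy2(ORIGINAL, "working_copy.tif")

# 3. Save every processed result in a lossless format (PNG here), so the act of
#    saving does not add compression loss on top of the processing itself.
img = Image.open("working_copy.tif")
processed = img.convert("L")                        # example processing step: grayscale
processed.save("processed_step1.png", format="PNG")
```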
Hardware malfunction: In computer graphics, visual artifacts may be generated whenever a hardware component (e.g. processor, memory chip, cabling) malfunctions, causing data corruption. Malfunction may be caused by physical damage, overheating (sometimes due to GPU overclocking), etc. Common types of hardware artifacts are texture corruption and T-vertices in 3D graphics, and pixelization in MPEG compressed video.
True, but you also have to keep in mind that hardware-level corruption generally looks much more dramatic than ordinary image artifacts, and it will often cause your computer to lock up; most systems have some form of error detection or correction. What you mostly see in still images is pixelization, which appears when an image is up-converted. Basically, you're taking one pixel, the smallest dot defined within any image, and multiplying it to make it bigger, so one dot becomes anywhere from a handful to several hundred dots.
Let's look at an example: one dot becomes a 20x20 block, that is, 400 dots, all in the color and brightness of the original dot. A picture blown up this way looks almost like a mosaic of colored squares. To reduce this effect, some up-conversion software makes a best attempt at smoothing those edges. It does this by averaging each color with those around it and creating a soft transition, sometimes even going farther out, looking for the edge of that color region and creating a shape based off of it. Now you've started creating a phantom shape where before there was none. But again, remember: the computer is GUESSING based on the surroundings.
Although this entire process is great for making an image look as though it has detail it simply does not have, the important factor is that it's a best guess, so the level of error starts to increase dramatically as you let the computer up-convert a smaller image into a larger one. There are other processes which introduce error as well, but to get a true idea of the level of error any such process can introduce, you have to have a firm knowledge of what the computer does internally during image processing or enhancement.
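To make that guessing concrete, here is a small sketch using Pillow in Python (the library choice and the file name "small_frame.png" are just assumptions for illustration; Image.Resampling needs Pillow 9.1 or newer). Nearest-neighbour resizing produces the mosaic of blocks described above, while bicubic resizing smooths them by interpolating, i.e. guessing, values between the original pixels.

```python
# Sketch: two ways of up-converting a small image by 20x, using Pillow.
from PIL import Image

small = Image.open("small_frame.png")                 # hypothetical low-resolution capture
new_size = (small.width * 20, small.height * 20)

# Nearest neighbour: every original pixel becomes a 20x20 block of the same color
# (the "mosaic" effect).
blocky = small.resize(new_size, resample=Image.Resampling.NEAREST)

# Bicubic: the software smooths the blocks by interpolating between neighbouring
# pixels. The result looks nicer, but every in-between pixel is an estimate,
# not information that was actually captured.
smooth = small.resize(new_size, resample=Image.Resampling.BICUBIC)

blocky.save("upscaled_nearest.png")
smooth.save("upscaled_bicubic.png")
```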
Compression: Controlled amounts of unwanted information may be generated as a result of the use of lossy compression techniques. One such case is the artifacts seen in JPEG and MPEG compression algorithms.
I'm with you there, 100%! Lossy compression is evil; although it's great for shrinking an image, sound file, or video into a more manageable size, it irreversibly throws data away. If you then take that image and try to get the detail back, you're inducing error upon error. ALWAYS keep the original image, and save it with the least possible compression. I store all my images with no compression, my audio with minimal compression, and my video in raw form. (Makes for huge files.)
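For anyone who wants to see that loss for themselves, here is a quick sketch (again Python with Pillow, file names hypothetical) that re-saves the same picture through JPEG a few times and then measures how far the pixels have drifted from the original.

```python
# Sketch: generational loss from repeatedly saving through lossy JPEG compression.
from PIL import Image, ImageChops

original = Image.open("original.png").convert("RGB")   # hypothetical uncompressed source

current = original
for gen in range(1, 6):
    current.save(f"resave_{gen}.jpg", format="JPEG", quality=75)   # lossy save
    current = Image.open(f"resave_{gen}.jpg").convert("RGB")       # reload what actually survived

# Pixel-by-pixel difference between the original and the 5th-generation copy.
diff = ImageChops.difference(original, current)
print("Per-channel (min, max) error after 5 re-saves:", diff.getextrema())
# The max values will be non-zero: data has been permanently altered, and no
# amount of later processing can recover what the compressor discarded.
```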