MPEG-4

Maybe when they release MPEG-4 receivers, they'll give us a little more memory so these little buggers don't choke on so many of the mundane tasks we ask them to do... I just wonder, though: even though MPEG-4 is smaller (size-wise), I've heard people here say it'll take the same amount of time to process (i.e., change channels), and that bugs me...
 
Ray_Air said:
So what does MPEG4 mean? The MPEG2 receivers will be obsolete and they can rape us for MPEG4 receivers?

MPEG is a compression algorithm for audio and video; MPEG-2 is the 2nd generation and MPEG-4 is the 4th generation.

My understanding of the way compression works: they say that with MPEG-4 they can squeeze more information per satellite, which tells me the compression is greater, which in turn says to me that the picture quality will not be as crisp as with MPEG-2.

Am I wrong, or am I reaching and overthinking computer theory?
 
bpickell said:
MPEG is a compression algorithm for audio and video; MPEG-2 is the 2nd generation and MPEG-4 is the 4th generation.

My understanding of the way compression works: they say that with MPEG-4 they can squeeze more information per satellite, which tells me the compression is greater, which in turn says to me that the picture quality will not be as crisp as with MPEG-2.

Am I wrong, or am I reaching and overthinking computer theory?
They say that the compression algorithm is better, so they will be able to put twice the number of compressed channels into the same bandwidth, at the same quality as today.
 
Yeah, I understand what they want to sell the public, but let's look at it like this.

You take a picture with a digital camera and save it on your computer as a TIF file, which, for those who don't know, is an uncompressed format.

Then you save a copy as a JPG file (a compressed format).

The TIF file will be quite large, around 4 or 5 MB depending on the resolution you took the picture at.

The JPG file will be quite a bit smaller, around 100K. (More content in the same space, just like MPEG-4.)

Now print the picture at 11x17 and let's see which one looks better.

Again, I am theorizing here and comparing it to the computer world, which I don't think is unreasonable, considering it uses computers to do the compression.
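
For anyone who wants to try bpickell's TIF-vs-JPG experiment themselves, here's a rough Python sketch. It assumes the Pillow imaging library is installed, and "photo.tif" is just a stand-in name for whatever uncompressed picture you have on hand:

import os
from PIL import Image

img = Image.open("photo.tif").convert("RGB")   # any uncompressed source picture
img.save("copy.tif")                           # TIFF copy (uncompressed by default)
img.save("copy.jpg", quality=85)               # JPEG copy (lossy compression)

print("TIF size:", os.path.getsize("copy.tif"), "bytes")
print("JPG size:", os.path.getsize("copy.jpg"), "bytes")

The size gap will look roughly like the 4-5 MB vs 100K numbers above; whether you can actually see the difference in an 11x17 print is exactly the question being argued in this thread.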
 
bpickell said:
...
Again, I am theorizing here and comparing it to the computer world, which I don't think is unreasonable, considering it uses computers to do the compression.
The key is being able to get all the information out of the compression that was put into it. Relative size of compressed files isn't necessarily an indication of the quality that can be extracted.

And this isn't really a compute-intensive process. The STB manufacturers have included a chipset that performs all the MPEG-2/4 decompression functions. Software isn't a factor here, except in how the chips were designed.
 
mdonnelly said:
The key is being able to get all the information out of the compression that was put into it. Relative size of compressed files isn't necessarily an indication of the quality that can be extracted.

But therein lies the problem. When a file is compressed, the algorithm actually removes information and throws it away to make the file smaller. Then the decompression utility "attempts" to figure out what bits are missing and reinserts them. That, by the way, works great with data but terribly with video. I don't know if you watch streaming video on the internet, but that is what compression does to a picture.
 
bpickell said:
But therein lies the problem. When a file is compressed, the algorithm actually removes information and throws it away to make the file smaller. Then the decompression utility "attempts" to figure out what bits are missing and reinserts them. That, by the way, works great with data but terribly with video. I don't know if you watch streaming video on the internet, but that is what compression does to a picture.
No, nothing is "thrown away". A "good" compression algorithm looks for repeatable sequences that can be expressed with fewer bytes.

Look at a zip file. You can compress the average ASCII file by about 75% and get every piece of it back exactly as it was. Binary files with random bytes will not compress as well, but they do compress, and all the data is extracted to recreate the original. Try zipping up an EXE file, then extract it and run the program. It works.

RAR does the same as zip, but the algorithm is better, so the compressed file size is smaller.
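
If you want to see the "nothing is thrown away" point for yourself, here's a minimal Python sketch using the standard zlib module (the same Lempel-Ziv family of compression that zip uses):

import zlib

original = b"the quick brown fox jumps over the lazy dog " * 200   # repetitive ASCII text
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print("original:  ", len(original), "bytes")
print("compressed:", len(compressed), "bytes")
print("identical after round trip?", restored == original)         # True - lossless

Every byte comes back exactly as it went in, which is what lossless means. The disagreement below is about whether video codecs work the same way.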
 
mdonnelly says:

No, nothing is "thrown away". A "good" compression algorithm looks for repeatable sequences that can be expressed with fewer bytes.

Look at a zip file. You can compress the average ASCII file by about 75% and get every piece of it back exactly as it was. Binary files with random bytes will not compress as well, but they do compress, and all the data is extracted to recreate the original. Try zipping up an EXE file, then extract it and run the program. It works.

RAR does the same as zip, but the algorithm is better, so the compressed file size is smaller.

Please do some reading up on video compression algorithms. MPEG-2, MPEG-4 and VC-1 are lossy compression algorithms. Bits into the encoder != bits out of the decoder, which is the very definition of a lossy encoder. However, they've gotten much, much better in the years since MPEG-2 came out, and even MPEG-2 has gotten better since its introduction.

MPEG-4 and VC-1 both use different approaches to compression than MPEG-2, and as such are able to achieve a higher level of compression.

Keep in mind the goal is to compress without noticeable artifacts creeping into the equation.

The raw, uncompressed bit rate for HDTV is 1920 (width) x 1080 (height) x 24 (bit depth) x 30 (complete frames per second). That works out to just under 1.5 gigabits per second for raw, uncompressed HD. ATSC allows for a maximum bandwidth of roughly 19.2 megabits per second with MPEG-2 encoding. That's a compression factor of roughly 75:1. You simply cannot get that level of compression with a lossless compression algorithm.
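
Just to show the arithmetic (a quick back-of-the-envelope in Python, using the numbers above):

width, height = 1920, 1080          # pixels
bit_depth = 24                      # bits per pixel
fps = 30                            # complete frames per second

raw_bps = width * height * bit_depth * fps
atsc_bps = 19.2e6                   # rough ATSC MPEG-2 payload

print("raw HD rate: %.2f Gbit/s" % (raw_bps / 1e9))      # ~1.49 Gbit/s
print("compression: %.0f : 1" % (raw_bps / atsc_bps))    # just under 80:1, the ~75:1 ballpark above

Even a good lossless compressor typically only manages a few-to-one ratio on picture data, nowhere near 75:1, which is why lossy techniques are required.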

So yes, information is in fact thrown away; however, the algorithms are very smart about what is thrown away. The same occurs with the Dolby Digital and DTS audio algorithms.

This isn't the computer world with its Lempel-Ziv-style, entropy-based compression algorithms. Here's a quick tutorial showing that MPEG-2 and MPEG-4 are indeed lossy compression algorithms.

Cheers,
 
bpickell:

MPEG is a compression algorithm for audio and video; MPEG-2 is the 2nd generation and MPEG-4 is the 4th generation.

My understanding of the way compression works: they say that with MPEG-4 they can squeeze more information per satellite, which tells me the compression is greater, which in turn says to me that the picture quality will not be as crisp as with MPEG-2.

Am I wrong, or am I reaching and overthinking computer theory?

They use newer and more efficient techniques, which allow them to gain greater compression with fewer artifacts.

It isn't as simple as more compression = lower PQ, because you aren't comparing the same algorithm.

Cheers,
 
justalurker said:
Yep. Hard not to have loss when real-time compression is going on. One just has to pick a less lossy compression if they want a better signal within the bandwidth provided.

Once again, this is an overly simplistic answer. It reminds me of the audio people arguing that DTS is better because it uses more bits. Not necessarily true because you are comparing two different compression algorithms.

If we were to compare MPEG-2 at 12 Mbits/second and MPEG-4 at 10 Mbits/second on HD material, by your criteria MPEG-2 would look better, as it has more bandwidth allocated.

Because MPEG-4 uses different techniques, it is capable of more efficient compression and can utilize lower bandwidth. Therefore, the MPEG-4 encoding at the bit rate listed above would in all likelihood present a superior picture.


Cheers,
 
John Kotches said:
They use newer and more efficient techniques, which allow them to gain greater compression with fewer artifacts.

It isn't as simple as more compression = lower PQ, because you aren't comparing the same algorithm.

Cheers,

So I'm not totally off track, just a little. I have the basic theory, just not all the facts.

I just can't see how you can get the same HD picture with a smaller size. I'm just using more common sense than actual facts, I guess.

It was definitely a good debate, though.
 
Here are some facts on how MPEG-2 and MPEG-4 save on file size. Algorithms are used to compare what changes occur between past and future frames. If you were to observe each frame manually, you'd likely see that the majority of the pixels are the same or similar. For example, the lips of a person are changing, but not much changes in the background. Or perhaps the background is moving but only doing a pixel shift, so the encoder only needs to be concerned with new pixels entering and leaving. All similarities need only be written once, and only the parts of the picture that are different need to be saved. The result is that complete frames can be animated using old picture data combined with just the new updates. I believe MPEG-4 has more intelligent algorithms that can judge even further into the future frames.
Bottom line: don't think of just compressing an entire picture, but rather of looking forward and back in time to determine the small differences between each frame and only storing those. If a video clip is made up of a lot of completely different pictures changing rapidly, then MPEG doesn't do such a good job and large blockiness occurs, but in real life that's rarely the case.
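
A toy sketch of that idea in Python (the "frames" here are just short lists of made-up pixel values; real encoders work on blocks and motion vectors, but the store-only-the-differences principle is the same):

def encode_delta(prev_frame, next_frame):
    # keep only the (position, new_value) pairs that actually changed
    return [(i, new) for i, (old, new) in enumerate(zip(prev_frame, next_frame)) if old != new]

def apply_delta(frame, delta):
    # rebuild the next frame from the previous one plus the stored changes
    rebuilt = list(frame)
    for i, value in delta:
        rebuilt[i] = value
    return rebuilt

frame1 = [10, 10, 10, 50, 50, 10, 10, 10]    # pretend pixel values
frame2 = [10, 10, 10, 55, 52, 10, 10, 10]    # only the "lips" moved

delta = encode_delta(frame1, frame2)
print("changes stored:", delta)                             # [(3, 55), (4, 52)] - 2 values instead of 8
print("rebuilt ok?", apply_delta(frame1, delta) == frame2)  # True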
 
Jergen:

Now we're starting to get somewhere. This is just scraping the surface, but it's a step in the right direction.

There are also full frames, which encode the entire picture, and difference frames, which encode only the deltas. The number of full frames can dramatically impact the overall data rate.
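
A rough back-of-the-envelope in Python for how the full-frame interval affects data rate. The byte sizes here are invented purely for illustration, not real MPEG numbers:

def stream_rate(full_frame_bytes, diff_frame_bytes, gop_length, fps=30):
    # average bits/second when every gop_length-th frame is a full frame
    fulls_per_sec = fps / gop_length
    diffs_per_sec = fps - fulls_per_sec
    return (fulls_per_sec * full_frame_bytes + diffs_per_sec * diff_frame_bytes) * 8

# hypothetical sizes: full frame ~100 KB, difference frame ~10 KB
print(stream_rate(100_000, 10_000, gop_length=15) / 1e6, "Mbit/s")   # 3.84 - a full frame every half second
print(stream_rate(100_000, 10_000, gop_length=5) / 1e6, "Mbit/s")    # 6.72 - full frames 3x as often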

We haven't even begun to discuss how they go about frame-to-frame compression and the different techniques employed, but that's quite a heady topic.

Cheers,
 
jergenf said:
If a video clip is made up of a lot of completely different pictures changing rapidly, then MPEG doesn't do such a good job and large blockiness occurs, but in real life that's rarely the case.

What about sporting events? Would that not classify as rapidly changing, with cameras panning and backgrounds changing, etc.? I guess I'm thinking more along the lines of auto sports. Usually when I'm watching, say, Formula 1 or Champ Car racing, they don't just focus on one car per se, but tend to jump from car to car and pan the field quite often. Although I haven't seen any Champ Car races in HD either.
 
bpickell said:
What about sporting events? Would that not classify as rapidly changing, with cameras panning and backgrounds changing, etc.? I guess I'm thinking more along the lines of auto sports. Usually when I'm watching, say, Formula 1 or Champ Car racing, they don't just focus on one car per se, but tend to jump from car to car and pan the field quite often. Although I haven't seen any Champ Car races in HD either.
Not really. With car racing, especially in HD, the panning effect is minimized because of the wide view. Also, even with constant panning, it's only the sides of the picture, where pixels are entering and leaving, that are of concern, which is really just a simple pixel-shift algorithm. A sporting event that might be harder on MPEG is hockey, because not only is there continuous panning, but there are a lot of vertical, diagonal, and horizontal patterns to make judgments about. I believe MPEG-4 has smarter algorithms for interpreting movement than MPEG-2.
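
Here's a toy Python sketch of that pixel-shift idea for a pan (one row of pixels, shifted by one; only the pixel entering at the edge is new information):

def pan_right(row, new_edge_pixel):
    # simulate a 1-pixel camera pan: everything slides over, one new pixel enters
    return row[1:] + [new_edge_pixel]

row = [1, 2, 3, 4, 5, 6, 7, 8]
next_row = pan_right(row, 9)
print(next_row)   # [2, 3, 4, 5, 6, 7, 8, 9]

# An encoder that recognizes the shift only has to store "shift by 1" plus the new pixel 9,
# instead of re-encoding all 8 values.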

What I meant by rapidly changing images is completely different pictures that have nothing in common. Imagine a slide show changing 10 times a second. MPEG is lousy at that.
 
John Kotches said:
justalurker said:
Yep. Hard not to have loss when real-time compression is going on. One just has to pick a less lossy compression if they want a better signal within the bandwidth provided.
Once again, this is an overly simplistic answer. It reminds me of the audio people arguing that DTS is better because it uses more bits.
I believe you missed something in what I said - I'm not looking for more or fewer bits, I'm looking for a better signal.
John Kotches said:
Because MPEG-4 uses different techniques, it is capable of more efficient compression and can utilize lower bandwidth. Therefore, the MPEG-4 encoding at the bit rate listed above would in all likelihood present a superior picture.
Which is good, because MPEG-4 is the direction that E* and D* are going.

JL
 
John Kotches said:
Jergen:

Now we're starting to get somewhere. This is just scraping the surface, but it's a step in the right direction.

There are also full frames, which encode the entire picture, and difference frames, which encode only the deltas. The number of full frames can dramatically impact the overall data rate.

We haven't even begun to discuss how they go about frame-to-frame compression and the different techniques employed, but that's quite a heady topic.

Cheers,
Basically, when the first frame is scanned, the encoder looks ahead at future frames before even compressing the first one. If the future frames are similar, it searches for the differences until it detects a frame that has nothing in common, at which point the whole process repeats.

It divides the subject frame into zones in an attempt to detect objects in the foreground versus the background. Assuming the background isn't really changing, it compresses that once, then concentrates on the foreground portions, writing only the changes. Sometimes there will be changes in the background, like maybe people running; it uses predictive algorithms to either subdivide those small movements or just apply straight compression to the major portions of the frame.

In my previous post I said that completely different frames may pose problems (i.e., a rapid slide show). That is not entirely true. If, for example, three completely different pictures were displayed in one second, you really have 10 repeated frames of each picture. The encoder only has to write/compress 1 frame and zero changes for the next 9, so over the course of 30 frames (1 second of real time) it only has to provide data for 3 frames.

Typically, with normal video, the complete background data need only be stored once. Remember, the encoder can obtain the complete background by viewing all the frames in that scene, even if the background is panning or the subject is moving within it. The foreground is handled much the same way; a person's face, for example, doesn't generally change all that much. Eyes blink, lips move, etc., but only those changes need updates.

When the decoder builds the actual video in real time, it decompresses all the background data and animates the foreground. The overall purpose of the compression is to package the video in a way that describes the changes that occurred, so that real-time frames can be rebuilt from it. This technique saves on file size in almost every case, compared to straight compression of every complete frame.
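
A very simplified decode-side sketch in Python (made-up data: one stored background plus a small foreground update per frame is enough to rebuild every frame in the scene):

background = [0] * 10                   # stored once for the whole scene
foreground_updates = [
    {2: 7, 3: 7},                       # frame 1: "face" pixels
    {2: 7, 3: 8},                       # frame 2: lips moved slightly
    {2: 7, 3: 7},                       # frame 3: back again
]

def rebuild(background, updates):
    # rebuild each real-time frame from the background plus its foreground changes
    frames = []
    for update in updates:
        frame = list(background)
        for position, value in update.items():
            frame[position] = value
        frames.append(frame)
    return frames

for frame in rebuild(background, foreground_updates):
    print(frame)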
 
When DirecTV started, they were MPEG-1 (roughly VCR quality). When they went to MPEG-2, did people have to get new receivers? I have a DSS information and installation book written in 1995, and it does say DirecTV was originally MPEG-1.
 
