Median will be turned off on 6/1/18. Please use Panopto instead. Panopto training information is available.


Help - Video for the Web

The Short Version

Median accepts many video types and codecs, but only one combination allows your media to be immediately available for viewing: the MP4 file format with H.264 video and AAC audio. Additionally, the video must be under 1900 kbps total bitrate and at 720p (1280x720) resolution or lower. We recommend encoding your media for the web directly from your editing application, or exporting a very high-quality version and then using a program like Handbrake to transcode it to a web-friendly version. A separate guide to using Handbrake with Median is also available.

The Long Version


This document is tailored for users who want to format their video for use on the web, including platforms like Median, YouTube, and Vimeo. The major video hosting sites of the internet share many of the same standards Median uses. However, Median does not have the resources of a large-scale service like YouTube, so users are asked to do a little extra work on their end to make their experience with Median more efficient. This guide demonstrates how to use Median in the quickest and simplest fashion possible, while explaining key concepts that affect the use of video on the internet in general.

For general use, content uploaded to Median is handled in an automated process that is invisible to the user. You simply upload your video, Median transcodes it (usually within 5 minutes), and it's ready to be viewed. For the more technically-inclined person, that process can be made a bit more transparent and hands-on by reading this document.

How Video Is Handled On The Web

Briefly explained, content uploaded to Median is sorted, analyzed, copied, transcoded, and then served in a secure fashion. Audio and image uploads are processed very quickly: the whole thing takes about as long as uploading the media and filling out the Upload Wizard. Once you are done, images and audio are immediately available.

For video, the process involves an additional analysis followed by transcoding. Video delivery systems like YouTube and Vimeo employ this process as well. When a video is uploaded, the media file is checked for which video and audio codecs are used (Quicktime, ProRes, MPEG, WebM, Windows Media, etc), how large the frame dimensions are (width and height), and what bitrate is used (25 megabits/sec, 800 kilobits/sec, etc). These factors determine whether the file can be viewed immediately in a way that is accessible to the largest population of users, or whether the video has to be transcoded to a more accessible format.

“Transcoding” is the process of copying the existing video file and re-encoding it to a different format, whether it’s resizing the video or changing it from Windows Media format to Quicktime format. Think of it as transferring video footage from a tape to a digital format. It needs to be done to every frame of video, so it takes time.

On the web, services like YouTube and Median cannot stream raw footage from a high-end camera. The bitrate is far too high: most broadband users have around a 10 megabit per second connection, while standard shooting formats include DV25, which is 25 megabits per second, and ProRes, which can be anywhere from 20 to 400 megabits per second. With those numbers in hand, it’s easy to see that 25 megabits per second of video cannot possibly fit through the 10 megabits per second your home connection allows. This is why services like Vimeo automatically transcode your video down to a level that works for the broadest range of internet connections. YouTube, for example, encodes several different versions for different bandwidth options (notice the little box in YouTube’s player that lets you choose 480p, 720p, 1080p, etc). Median features the same process.
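To make those numbers concrete, here is a quick back-of-the-envelope check in Python. The figures are the ones quoted above; real-world connection throughput varies.

```python
# Rough bandwidth check: can a home connection keep up with camera-native bitrates?
broadband_mbps = 10      # typical home downstream, megabits/sec (figure quoted above)
dv25_mbps = 25           # DV25 camera format
prores_mbps = (20, 400)  # ProRes, depending on flavor

def fits(stream_mbps, connection_mbps):
    """True if the stream's bitrate fits within the connection's bandwidth."""
    return stream_mbps <= connection_mbps

print(fits(dv25_mbps, broadband_mbps))  # DV25 over home broadband: does not fit
print(fits(0.8, broadband_mbps))        # an 800 kbps web encode: fits easily
```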

The main difference between Median and services like Vimeo is that Median currently has only a handful of servers transcoding uploaded video files, and each can process one video at a time. YouTube, on the other hand, has thousands of servers encoding media at blinding speed, so an upload can be encoded in as little as 30 seconds. On Median, the process can take anywhere from 5 minutes to 24 hours depending on how many other videos are being processed. This is why Median users are encouraged to do a little more work on their end to make sure their videos are immediately accessible if time is a factor.

When transferring footage from a tape to a digital file, it is usually captured at a 1:1 ratio: for every second on the tape, it takes a second to transfer it to a digital file. Converting from one digital format to another, on the other hand, is limited only by the computing power available. Converting a 30-minute DV25 file to an 800 kbps H.264 file can take only five minutes on a fast computer, or an hour on a slow one.

For this reason, Median has a “fast lane” option for video uploads. If you upload your video in the correct format, Median skips the transcoding process entirely and makes your video immediately available for viewing. However, getting the encoding process right is not easy for someone with little or no experience with video codecs and bitrates. This guide aims to help you with that.

A Quick Glossary Of Terms

Codec - the algorithm used to compress either the audio or video portion of a file. Different codecs compress differently. Some codecs, like Quicktime Animation, preserve a lot of image quality and result in large files. Other codecs, like H.264, are designed to make a very small file that preserves as much quality as possible. MP3 is another codec you’ve probably heard of.

Bitrate - how much data is used per increment of time, usually per second. There are various ways to measure it, but usually it’s in kilobits or megabits per second. This has nothing to do with the frame size of the video and everything to do with the file size.

Bits versus bytes - bitrates are typically measured in bits, not bytes. Megabytes typically describe the size of a file, while megabits are commonly used for transmission speeds. A bit is a single 1 or 0, while a byte is eight bits. Really, you shouldn’t worry about it: bitrates are about kilobits and megabits, file sizes are about kilobytes and megabytes. Bits are usually written in lowercase and bytes in uppercase: kbps is kilobits per second, KBps is kilobytes per second.

480p, 720p, etc - these are just shortcuts to describe a video’s frame size. 480 and 720 are the number of horizontal lines of resolution, which in the computer world translates to pixels (720p means the frame is 720 pixels high, though even this is not always strictly true). The lowercase “p” means the video has progressive frames instead of interlaced frames; a lowercase “i” means the video is interlaced instead of progressive. Honestly, the distinction means little in the online world (and on Blu-ray movies), because contemporary compressed video is made up of many kinds of frames (I-frames, P-frames, B-frames, and more) all working together to make the best picture possible.

The “Art” of Encoding

It’s not really an art, but it does require a certain level of finesse: it varies for every video and generally takes some trial and error before you get the results you want. Still, there are best practices for almost every situation:

Frame Dimensions versus Bitrate

When encoding video, the encoder is in a continuous battle between how much space there is to fill (the height and width of the video frame) and the maximum allowable bitrate per frame. Look at it this way: you have a ten-by-ten grid, 100 blocks of total surface area, but you may only use 50 blocks to make an image. How do you do it best? This is analogous to having a 10-by-10-pixel video frame (100 pixels total) and only 50 bits of bitrate per frame.

Now let’s make it bigger and more realistic: you have a 1080p24 video file; that’s 1920 pixels wide by 1080 pixels high at 24 frames per second. That’s about 2.07 million pixels, 24 times each second. And you have 800 kilobits per second, or 33.3 kilobits per frame: roughly 33,300 bits for all 2.07 million pixels in each frame. That is nowhere near enough data per frame to make a good-looking video, so you have to change one of the variables: the frame size, the bitrate, or the frames per second. And since the whole point is a small, compact file that streams easily over the internet, you can’t increase the bitrate.

Decreasing the frame dimensions is an easy way to fix that problem. YouTube did not stream 720p and 1080p at all until 2008, and arguably still does not use broadcast-quality bitrates for them. Dropping from full HD (1920 by 1080 pixels) to DVD quality (720 by 480 pixels, or 480p) cuts the number of pixels per frame from about 2.07 million to 345,600, so the same 33,300 bits per frame go much further and the image looks a lot better. Yes, the frame is smaller, but you have to sacrifice somewhere for internet streaming video.
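The arithmetic in the last two paragraphs can be written out directly; notice how the bits available per pixel grow when the frame shrinks:

```python
def bits_per_frame(bitrate_kbps, fps):
    """Bits available for each frame at a given total video bitrate."""
    return bitrate_kbps * 1000 / fps

def bits_per_pixel(bitrate_kbps, fps, width, height):
    """How many bits the encoder can spend on each pixel of each frame."""
    return bits_per_frame(bitrate_kbps, fps) / (width * height)

print(1920 * 1080)                                    # pixels in a 1080p frame: 2,073,600
print(720 * 480)                                      # pixels in a 480p frame: 345,600
print(round(bits_per_frame(800, 24)))                 # ~33,333 bits per frame at 800 kbps, 24 fps
print(round(bits_per_pixel(800, 24, 1920, 1080), 3))  # ~0.016 bits per pixel at 1080p
print(round(bits_per_pixel(800, 24, 720, 480), 3))    # ~0.096 bits per pixel at 480p
```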

Video and Audio Bitrates

Video and audio both have bitrates, but in most viewing applications, and for streaming over the internet, what matters is the total combined bitrate. This means there is wiggle room in how that total is divided. The video will always take the vast majority: audio bitrates for web video typically do not go above 128 kilobits per second, whereas video can go up to 2,000 kilobits per second. But in a constrained situation, like a budget of 800 kilobits per second, an extra 50 kilobits for video might make the image look a bit better. Typically I encourage users to use 700 kbps for video and 96 kbps for audio, but you could easily change that to 750 kbps for video and 64 kbps for audio and hear no substantial difference. It’s worth trying.
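As a quick sanity check, the two splits suggested above work out to the following combined bitrates:

```python
def total_kbps(video_kbps, audio_kbps):
    """Combined bitrate, which is what streaming and playback actually care about."""
    return video_kbps + audio_kbps

print(total_kbps(700, 96))  # the suggested starting point: 796 kbps total
print(total_kbps(750, 64))  # shifting bitrate toward video: 814 kbps total
```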

Multi-pass or Two-pass Encoding

One of the least used but most beneficial ways to make your video look better is to use multi-pass (AKA two-pass) encoding. The explanation is simple: the encoder makes a first pass over the entire video that merely analyzes each frame and determines how best to use the 800 kbps of space available per frame; on the second pass, the encoder actually performs the encode. Just as important, the H.264 video codec (the standard video codec for the web) supports predictive (P-) frames and bidirectional (B-) frames, so that first pass not only analyzes each frame but examines its relation to the frames before and after it, and saves space based on that relationship.

For example, if you have a ten-second shot of an interview where the background never changes and only the person’s head moves, the first pass will recognize this. It will know that certain sections of the frame don’t change for an extended period of time, meaning it won’t have to redraw them in each frame. The video will essentially store one frame of the background and then dedicate all of the per-frame bitrate to the person’s face and head movements, making the overall quality much better.

However, two-pass encoding has two drawbacks. First, because it makes two passes instead of one, the whole process takes anywhere from 1.5 to 2 times longer than a single-pass encode. Second, multi-pass encoding doesn’t always help much with fast-paced action where the content is constantly changing; it helps a little, but the overall limitation of 800 kbps can be too stifling.

Even with these drawbacks, multi-pass encoding is worth it, and most machines run the first pass very quickly, since it only analyzes the file rather than actually encoding the media.
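This guide's tool recommendations below use Handbrake, but as an illustration of what a two-pass encode looks like under the hood, here is a sketch that builds the two command lines for ffmpeg, a command-line encoder not otherwise covered in this guide. The bitrate values mirror the 700 kbps video / 96 kbps audio suggestion above; the file names are hypothetical.

```python
def two_pass_commands(src, dst, video_kbps=700, audio_kbps=96):
    """Build the two ffmpeg invocations for a two-pass H.264/AAC encode.

    Pass 1 only analyzes the video (output is discarded); pass 2 does the
    actual encode using the statistics file written by pass 1.
    """
    common = [
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264", "-b:v", f"{video_kbps}k",
    ]
    # Pass 1: no audio, null output. On Windows, use "NUL" instead of /dev/null.
    pass1 = common + ["-pass", "1", "-an", "-f", "null", "/dev/null"]
    # Pass 2: real encode, with AAC audio at the 44.1 kHz sample rate Median expects.
    pass2 = common + [
        "-pass", "2",
        "-c:a", "aac", "-b:a", f"{audio_kbps}k", "-ar", "44100",
        dst,
    ]
    return pass1, pass2

p1, p2 = two_pass_commands("master.mov", "web.mp4")
print(" ".join(p1))
print(" ".join(p2))
```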

Recommended Transcoding Software

Let me reiterate that transcoding is about trial-and-error when you're starting out. It's about finding a workflow that is comfortable for you. I recommend exporting from whatever source you're working with (Final Cut Pro, Avid, etc) a high-quality version of your video and then using one of the following tools to transcode it for web-readiness. Many video editing programs have good export functions, some of which I'll cover here.

  • Handbrake is by far the simplest and quickest to use. It is available for Windows, Mac, and Linux. It is actually what Median's transcoding servers use to encode video since it is free and uses open-source encoders for H.264 and AAC, known as x264 and FAAC respectively. It is available on every lab computer on campus.
  • Quicktime Pro 7 is a decent and fairly straightforward encoder, available on all the Macs in computer labs on campus. It currently lacks adequate support for multiple processor cores, so it can take significantly longer than Handbrake. However, it uses Apple's encoders for H.264 and AAC, which sometimes give a small boost in overall image quality. Note that Quicktime Pro 7 is officially unsupported on Snow Leopard and beyond, having been replaced by Quicktime X (which has no custom transcoding functionality), and a personal copy costs money.
  • Adapter is a very easy to use encoder, available on Mac and Windows. It uses the same kind of engine as Handbrake, so it produces similar results at a good speed.
  • MediaCoder is a Windows-only transcoding suite for desktop and devices. It's decent, but it's much more convoluted than Handbrake.
  • Compressor is built into the Final Cut Studio suite, and is a valuable transcoder if you have the time to figure it out. It can be a bit complex, but seasoned Compressor users should be able to fit Median's parameters into it without a problem.

The Parameters for Median

With all of that said, here are the guidelines for instant availability on Median.

  • File format/container: .mp4
  • Recommended total average bitrate: 1800 kbps
  • Video codec: H.264 (or x264 if you are using Handbrake)
  • Video frame: 720p (1280x720) or smaller, non-anamorphic, 1:1 pixel ratio
  • Audio codec: AAC (or FAAC or AAC CoreAudio if you are using Handbrake)
  • Audio sample rate: 44.1 kHz

If your file fits those parameters, it will be available for viewing immediately after uploading to Median. Note that these parameters may change in future versions; what’s most likely to change is the recommended total bitrate, since as broadband speeds grow, so do allowable bitrates.
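For the technically inclined, the fast-lane rules above can be expressed as a simple check. This is only a sketch of the published parameters, not Median's actual validation code, and the function and argument names are made up for illustration:

```python
def qualifies_for_fast_lane(container, video_codec, audio_codec,
                            total_kbps, width, height):
    """Rough check against the instant-availability rules listed above:
    MP4 container, H.264 video, AAC audio, under 1900 kbps total bitrate,
    and 720p (1280x720) or smaller. Not Median's real validator."""
    return (
        container.lower() == "mp4"
        and video_codec.lower() == "h.264"
        and audio_codec.lower() == "aac"
        and total_kbps < 1900
        and width <= 1280 and height <= 720
    )

# A file matching the recommended settings qualifies:
print(qualifies_for_fast_lane("mp4", "H.264", "AAC", 1800, 1280, 720))
# Raw 1080p camera footage does not:
print(qualifies_for_fast_lane("mov", "ProRes", "PCM", 25000, 1920, 1080))
```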

If You Have Any Questions

Please feel free to email us about anything relating to Median or video for the web by contacting us.

Last updated 10/16/15.