FAQ

The purpose of this page is to answer some of the frequently asked questions about how to implement teleradiology and how much it will cost.

The diagram below shows the basic process by which images are placed onto a computer, transmitted and viewed. The common questions follow.

First, the image is translated into a computer file. For films, that requires a specialized film digitizer, which converts the film into a dense matrix of dots. For digital modalities, either (1) the data can be output directly into a computer, or (2) the image on the modality's console can be "grabbed" via devices called frame grabbers. Once converted into a computer file, the image can be compressed to reduce the time needed to transmit the data. It is then transmitted using regular phone lines, specialized phone lines, satellites, etc. After being received by the computer on the other end, the data is decompressed to turn it back into an image file, which can then be viewed by the viewing application.
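Purely as an illustration (not any particular vendor's protocol), here is a minimal Python sketch of the send and receive sides of that pipeline: raw pixel bytes are losslessly compressed with zlib, transmitted over a TCP connection with a small made-up header, then decompressed on the far end, ready for the viewer. The header layout, function names, and the choice of zlib are assumptions made for the sketch.

    import socket
    import struct
    import zlib

    def send_image(pixels: bytes, width: int, height: int, host: str, port: int) -> None:
        """Compress raw grayscale pixel bytes and transmit them over a TCP link."""
        compressed = zlib.compress(pixels)                    # lossless compression step
        header = struct.pack("!III", width, height, len(compressed))
        with socket.create_connection((host, port)) as sock:
            sock.sendall(header + compressed)                 # transmission step

    def _recv_exact(sock: socket.socket, n: int) -> bytes:
        """Keep reading until exactly n bytes have arrived."""
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("connection closed before the image arrived")
            buf += chunk
        return buf

    def receive_image(sock: socket.socket):
        """Read one image off the socket and decompress it for the viewing application."""
        width, height, length = struct.unpack("!III", _recv_exact(sock, 12))
        pixels = zlib.decompress(_recv_exact(sock, length))   # decompression step
        return pixels, width, height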

What is the resolution of the digitized image?

For the digital modalities, the resolution is the same as the console or the printed film. If a frame-grab method is used, you'll get the same 8 bits of data as you would get from the film (i.e., 256 shades of gray). If you use a direct digital interface, such as DICOM-3, you could have all 12 bits of data (i.e., 4096 shades of gray). For film, the resolution depends on the scanner and may be selectable. The ACR recommends a resolution of 2.5 lp/mm, 10 bits deep, for film digitization for primary reads. This translates into a matrix of about 1800 x 2200 with 1024 shades of gray. Most film digitizers will output 12 bits of grayscale data (4096 shades of gray), but not all of those bits carry "real" information; a good digitizer will give you 10 "good" bits, though.
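Here is where the 1800 x 2200 figure comes from: 2.5 lp/mm requires about 5 pixels per millimetre (two pixels per line pair), and a standard 14 x 17 inch film is roughly 356 x 432 mm. A quick back-of-the-envelope check in Python; the 14 x 17 inch film size is our assumption, and the ACR figures are as quoted above.

    # Back-of-the-envelope check of the ACR film-digitization recommendation.
    lp_per_mm = 2.5                   # ACR recommended spatial resolution
    pixels_per_mm = 2 * lp_per_mm     # one line pair needs two pixels
    film_mm = (14 * 25.4, 17 * 25.4)  # assume a standard 14 x 17 inch film

    matrix = tuple(round(side * pixels_per_mm) for side in film_mm)
    shades = 2 ** 10                  # 10 bits deep

    print(matrix, shades)             # ~ (1778, 2159) pixels, 1024 shades of gray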

What are the differences between film digitizers?

There are three basic technologies used: camera-on-the-stick, CCD and laser. Camera-on-the-stick shines a light through the film, much like an overhead projector, and takes a picture of the film. Both the cost and the quality are low; it isn't recommended. CCD digitizers use specialized fluorescent bulbs to shine light through the film and CCD arrays as detectors. Laser digitizers use a laser to illuminate the film and photomultipliers as detectors. Lasers do not have the "bleeding" (overlap from pixel to pixel caused by light scattering) seen in the CCD technology, and they also have a larger dynamic range, mainly because the dark regions of the film are better illuminated. Lasers are also more expensive than CCDs. The two technologies are quite comparable in achievable resolution; both can support 4k by 4k.

Do I lose image information when I compress the data?

You can compress the data by about 2:1 or 3:1 without any loss of data; this is called lossless compression. Above that ratio, there will be some loss, regardless of the compression technique used. There are several well-known methods of compression, including newer methods such as wavelet compression. They vary in how much they compress the data and in the quality of the reproduced image, including the types of artifacts produced. For primary reads, we recommend lossless or low-ratio compression algorithms; the ACR is silent on this.
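As a concrete illustration of "lossless", the snippet below compresses a synthetic data buffer with zlib (one well-known lossless method; your teleradiology software may use a different one), checks that decompression gives back the identical bytes, and reports the achieved ratio. The synthetic buffer is an assumption and will not compress like a real radiograph, which typically lands in the 2:1 to 3:1 range mentioned above.

    import zlib

    # Synthetic stand-in for raw image data; real pixel data compresses differently.
    original = bytes(range(256)) * 4096          # ~1 MB of sample bytes

    compressed = zlib.compress(original, level=9)
    restored = zlib.decompress(compressed)

    assert restored == original                  # lossless: every bit comes back
    ratio = len(original) / len(compressed)
    print(f"compression ratio: {ratio:.1f}:1")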

How long does it take to send the image?

The charts below show the approximate times to transmit different types of medical images over various networks (28.8 kbps modems, 56 kbps, ISDN, T-1) using different compression ratios. Please note that the scale is logarithmic!
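If you want to estimate times for your own images, the calculation is simply (bits to send, after compression) divided by the line speed, as in the sketch below. The image sizes and the nominal line rates are assumptions for illustration; real throughput will be somewhat lower once protocol overhead is included.

    # Rough transmission-time estimates: time = (image bits / compression ratio) / line rate.
    images_mb = {"CT slice": 0.5, "CR chest": 8.0, "digitized film": 16.0}  # assumed sizes, MB
    lines_bps = {"28.8 modem": 28_800, "56 kb": 56_000,
                 "ISDN": 128_000, "T-1": 1_544_000}                         # nominal bit rates
    compression = 2.5                                                       # e.g. ~2.5:1 lossless

    for name, size_mb in images_mb.items():
        bits = size_mb * 8_000_000 / compression
        times = ", ".join(f"{line}: {bits / rate:.0f} s" for line, rate in lines_bps.items())
        print(f"{name}: {times}")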

You do not lose information by using a slower phone line; it just takes longer for the information to arrive. Information is lost in compression, not transmission.