3D rendering services, a versatile means of producing computer-generated imagery, are offered to a wide range of clients, from architecture students to large film studios with valuable external contractor partnerships.

However, much like many other services in similar industries, the monetary cost of 3D rendering is directly related to the quality and complexity of the end product.

Generally, most ordinary 3D rendering projects cost between $100 and $10,000 USD, an extremely wide range owing to the subjective nature of 3D rendering, the individuals involved, and the differences in rendering methods between 3D rendering artists.

Oftentimes these prices are carefully calculated so as to provide the client the best quality end product for the compensation given, sometimes leaving the freelance renderer or rendering firm with only a thin profit margin after operating costs are factored in.

Why is 3D Rendering Considered Expensive?

While the exact definition of expensive is entirely subjective, and the price may in fact be quite acceptable for the sort of rendering project being completed, several factors lie behind these rendering prices.

The first of these is the level of manpower and specialized skill involved in producing 3D rendering projects. Though 3D rendering is facilitated by specialized software and automation scripts, it usually requires at least one engineer or render artist to exercise creative and technical judgment during the rendering process.

This is especially true in sectors such as the video game industry and similar graphically intensive media, wherein one or even an entire team of professional render artists may work on a single scene so as to provide a photorealistic or otherwise high quality visual experience.


Apart from the manpower and skill used in creating high quality 3D renders, there is also the matter of operating costs, both for the equipment used in the rendering project and for the management of an organization dedicated to 3D rendering.

3D rendering, especially when completed within a reasonable length of time, requires significant processing power, usually provided by top of the line computer systems equipped with specialized hardware such as graphics processing units (GPUs) and multi-core central processors (CPUs).

These computer systems, running at or near maximum capacity for long stretches, draw large amounts of power. A 3D rendering system therefore not only requires a lump sum to build, but also incurs ongoing operating costs in the form of maintenance, cleaning, and power usage.

How Much Electricity Does 3D Rendering Use?

Just like the particular monetary cost of 3D rendering, the electrical energy usage involved in creating a 3D rendered scene depends on the complexity of the project as well as the type of hardware used as a rendering system during the process.

Certain inefficient hardware may draw more power than is needed, especially when running outdated software or performing tasks the hardware was not designed for.

A good example is running true ray tracing calculations on a graphics processing unit without dedicated hardware for them: if the processor does not simply return an error, it will instead consume large amounts of power brute-forcing the complex calculations.

Generally, a non-industry-standard computer system used by a freelance render artist or home amateur will draw only approximately 600-800 watts while rendering (roughly 0.6-0.8 kWh per hour).

This figure, however, does not consistently apply to all forms of render projects, as stronger computer systems will likely utilize more energy per hour at their maximum capacity. 

Keep in mind that the relative maximum capacity of a rendering computer is dependent both on the hardware it consists of and the software it is using, and as such it becomes difficult to estimate the exact amount of electrical power that will be consumed over the course of a rendered project.
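Since exact consumption is hard to predict, a back-of-the-envelope estimate is often the practical approach. The sketch below converts an assumed average draw into energy and cost; the 700 W draw, 12-hour session, and $0.15/kWh rate are illustrative figures, not quotes from any particular hardware or utility.

```python
# Rough energy-cost estimate for a rendering session.
# All inputs are illustrative assumptions, not measured figures.

def render_energy_cost(avg_watts: float, hours: float, rate_per_kwh: float) -> tuple:
    """Return (energy in kWh, cost in dollars) for a render session."""
    kwh = avg_watts / 1000 * hours      # watts -> kilowatt-hours
    cost = kwh * rate_per_kwh
    return kwh, cost

kwh, cost = render_energy_cost(avg_watts=700, hours=12, rate_per_kwh=0.15)
print(f"{kwh:.1f} kWh, ${cost:.2f}")    # 8.4 kWh, $1.26
```

Even a heavily loaded workstation, in other words, usually adds only a few dollars of electricity per long render session; the hardware itself and the artist's hours dominate the bill.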

Can You Create a 3D Render with Your Home Computer to Save Money?

3D rendering, while a professional discipline that often takes years of study and experience to master, is also entirely accessible to amateurs who wish to explore the subject or even begin working as 3D render artists themselves.

Even using rudimentary home computers not specifically built for rendering, it is entirely possible to create basic 3D renders with the help of a few online tutorials, allowing amateurs to produce the projects they need at little to no cost.

Computer being used for 3D rendering

However, for 3D rendering projects that require a professional level of quality or must be completed within a certain length of time, it is best to outsource the work to external independent contractors or a firm specializing in computer generated visual works.

This is particularly important in cases wherein photorealistic graphical effects must be achieved in order to create the desired effect, such as in mass produced entertainment media or advertisements.

What Factors Can Affect the Cost of 3D Rendering?

While a multitude of factors come into play in the financial, energy, and manpower costs of 3D rendering, some contribute more to these costs than others, and these larger factors can be adjusted to keep a project within budget.

Types of 3D Rendering Compensation Models

Generally, 3D rendering is paid on a per-rendered-frame basis, wherein the client compensates the 3D render artist or firm for each image or frame produced. This is usually done where the client only requires snapshot images of the render, or multiple still images of a scene.

However, it is also possible for a client to instead pay the render artist or firm on a per project basis, with monetary compensation being dispensed once the entirety of the render project has been completed. This is more commonly seen in large-scale projects with a multitude of rendered frames, with the final output most likely being some sort of video or animation.
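For a client weighing the two models, the break-even point is simple arithmetic. The sketch below compares a hypothetical per-frame rate against a flat project fee; all figures are invented for illustration.

```python
# Comparing the two billing models described above.
# The $150/frame rate and $4,000 project fee are hypothetical.

def per_frame_total(frames: int, rate_per_frame: float) -> float:
    """Total cost under per-frame billing."""
    return frames * rate_per_frame

def cheaper_model(frames: int, rate_per_frame: float, project_fee: float) -> str:
    """Return which billing model costs the client less."""
    return "per-frame" if per_frame_total(frames, rate_per_frame) < project_fee else "per-project"

print(cheaper_model(frames=30, rate_per_frame=150, project_fee=4000))   # per-project
print(cheaper_model(frames=10, rate_per_frame=150, project_fee=4000))   # per-frame
```

As the example suggests, per-frame billing tends to favor small still-image jobs, while flat project fees become attractive as frame counts grow.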

Project Complexity

As previously mentioned in this article, the scope and complexity of a 3D rendering project is one of the defining factors in the cost of completing it.

This includes characteristics such as the number of frames being rendered, the type of rendering, what effects or computations will be included, and the processing power required to complete the render.

Manpower Requirements

Considering that most 3D render projects involve custom 3D models produced by the render artist or the firm's staff, particularly large-scale projects with many modeled objects in their scenes may well require quite a few individuals to create those models.

Apart from the creation of 3D digital objects, additional manpower may also be required if multiple scenes are being rendered concurrently or in the case of multiple plans being implemented simultaneously for the project.

Timetable of the Render Project

In the event that the client’s 3D render project requires a deadline be met with only a relatively short length of time available, it is likely that more rendering systems will need to be dedicated to the completion of the project, necessitating a higher premium as more power and work hours are consumed.

This is mostly applicable to projects with multiple scenes or frames being rendered, as single frame or image renders often do not take very long at all, and most likely will only require a single processing machine be occupied by the rendering operation.

Should You Hire a 3D Rendering Freelancer or a Rendering Company?

While there are doubtless many independent 3D rendering contractors with equal or greater skill than those employed by rendering firms, it is usually best to contract larger organizations that specialize in large-scale projects if the render project is particularly complex; these organizations also likely make use of a render farm to complete such contracts.

This is due to the fact that large 3D animation works or similar rendering projects oftentimes require a full complement of 3D modelers, experienced 3D rendering artists, software engineers, hardware technicians and many other types of support staff in order to facilitate a large rendering undertaking.

However, 3D rendering freelancers often present lower premiums and require less compensation due to their lack of an affiliation with any large corporation or firm, reducing their operating costs and allowing them to offer lower prices for similar levels of quality.

3D rendering freelancers are best utilized for smaller rendering projects that do not require significant volumes of manpower and support.



The term render farm, in the context of computer-generated image rendering, refers to a set of computers of particularly high computing performance that are most often used to produce visual effects in cinema, three-dimensional modelling for architectural demos, and even cartoon animation frames.

Render farms are primarily created or rented by animation studios and architectural firms so as to facilitate their work, allowing a great number of computations to be performed in a relatively short amount of time, at a level which cannot be easily replicated by any single computer system.

A render farm, by technical definition, is a set of computer nodes configured to compute in parallel, wherein animation and rendering are produced frame by frame using the separate but coordinated processing power of each individual computer system.

How Does a Render Farm Work?

A render farm works primarily by delegating computational tasks to each individual computer in the render farm network.

Generally, a render farm's queue managing software will instruct each computer as to the tasks required to render the animation, usually one frame at a time. This automated delegation allows the render farm, at least from a human perspective, to perform the extremely complex per-frame calculations of sophisticated image rendering nearly simultaneously.
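The delegation pattern described above can be sketched in a few lines. This is only an illustration of the one-frame-per-worker idea, with a thread pool standing in for the farm's nodes and a placeholder function standing in for an actual renderer invocation; real queue managers handle scheduling, failures, and priorities far more elaborately.

```python
# Minimal sketch of per-frame delegation across a pool of workers.
# render_frame is a stand-in for invoking an actual renderer.

from concurrent.futures import ThreadPoolExecutor

def render_frame(frame_number: int) -> str:
    """Placeholder 'render' that just names its output file."""
    return f"frame_{frame_number:04d}.png"

frames = range(1, 9)                                  # an 8-frame job
with ThreadPoolExecutor(max_workers=4) as pool:       # 4 "render nodes"
    outputs = list(pool.map(render_frame, frames))    # map preserves frame order

print(outputs[0], outputs[-1])                        # frame_0001.png frame_0008.png
```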

Even when the render farm's computers have not been tasked with creating an image frame, their processing power may still be used in a variety of other ways, such as rendering only a small portion of a frame, several frames at once, or something with a non-visual output, such as physics engine simulations.

Unless the entity in control of the render farm built the computer network itself, the farm is likely run and managed by proprietary, generally server-side software that only receives external input from the client through controlled means.

This is all the more applicable with the development of high-speed networked render farms, which do not require the client to be physically present with the render farm; instead, the client simply uploads their unrendered work to a cloud storage network.

If the client has not purchased the render farm outright, the farm's proprietors will most likely grant the client temporary software licenses, especially if the farm must be physically set up away from its usual base of operation, so as to permit legal use of the render farm and its computer programs.

What Is Render Capacity?

Render capacity refers to the total processing power the render farm can exert when fully operational. This statistic is of vital importance both to clients and to the render farm's firm, as it determines the speed and efficiency at which the render farm is able to work.

A variety of factors may affect the render capacity of any one render farm, with an excessive workload, insufficient processing power, or even simply the ambient temperature of the server room adding to the time it takes to complete a single render operation.

Naturally, the quality and complexity of the rendering operation will also increase the time it takes to complete, with higher-resolution images or three-dimensional renders with heavy post-processing effects taking significantly longer to finish than simple two-dimensional single-frame renders.

As technology progresses, so do the industry standards for rendering capacity, with what was once a state of the art rendering farm being reduced to an outdated processing system unable to keep up with newly developed technology. 

As such, render farms are continuously updated and their software optimized to the absolute maximum. Even at a physical level, parts are often swapped out as soon as new technology is approved for use in the rendering industry.

How are Render Farms Managed?

At a software level, render farms dictate the process of rendering through specialized software referred to as “queue managers”, either built into the particular rendering software the farm is utilizing or through an external program that acts directly on each individual computer in the network.

While this type of specialized software is not strictly necessary, particularly for relatively small render farms, it is far more efficient than manually assigning tasks to each and every machine connected to the system.

Local or On-site Render Farm Management

This is especially true in render farms built on a loose network of machines that are not all local, wherein factors such as slow connections and hardware differences make manual task assignment difficult.

Even if the render farm has all or most machines assembled on site, or in close enough proximity to each other that a high-speed internet connection is not needed, queue management software remains an excellent solution for task assignment.

As previously mentioned, render farms do not always need to assign a frame render, and can otherwise command a computer to perform a variety of tasks that do not directly involve the composition of a frame or a post-processing effect.

The queue manager software is most often present on every machine in the network, as well as on an off-site or separate server that performs the computations required to assign tasks to those machines. This is done both to facilitate network communication between the computers and to reduce the processing impact of the software on the farm's primary rendering machines.

Cloud Render Farm Management

However, in the case of render farms that do not have all or most of their processing computers physically on site or within close proximity to one another, certain options present themselves that facilitate both the function of the render farm and communication between the machines and their respective owners.

One key feature of these off-site render farms is an altered billing model, with advantages such as usage statistics and direct reporting of processing time and power utilization, both on the invoice and as an informational disclosure to the client leasing the service.

Another key feature is the concept of crowd-sourced processing power, wherein users may donate, or be compensated for, the use of their personal machines as part of a network that pools their combined hardware into a distributed cloud render farm, much like a botnet without the malicious implications.

However, this is quite uncommon, owing to the difficulty of synchronizing general-purpose machines with widely differing software and hardware. Apart from this, a crowd-sourced render farm is also considered quite insecure, both for the client and for the users' machines.

Do You Need a Render Farm?

Choosing to utilize a render farm for a rendering project or similar specialized undertaking is oftentimes an excellent choice, as it will save the client not only money but also time, since the vast majority of render farms are run by experienced professionals with redundancies in place should anything go wrong.

This is especially paramount in the case of projects with hard deadlines wherein any late production of the product may be disastrous for the client. The use of a render farm not only accelerates the process of rendering but also completes it in a far more professional manner than would be possible for an amateur renderer.

How Long Do Render Farms Take?

While the time any rendering project takes to complete depends on a variety of factors, such as post-processing effects, complexity, and the total processing power available to the client, render farms are generally far faster and more efficient than any ordinary lone computer.

Keep in mind that rendering is generally done frame by frame, with a single machine tasked with creating one frame at a time, meaning that the parallel processing of multiple rendering machines far outclasses any single rendering machine, even one that is technologically more advanced.
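A rough model of that speedup, assuming every frame takes the same time and frames are fully independent (the numbers below are purely illustrative):

```python
# Back-of-the-envelope comparison of a lone workstation versus a farm.
# Assumes identical per-frame times and an even split across machines.

import math

def render_hours(total_frames: int, minutes_per_frame: float, machines: int) -> float:
    """Wall-clock hours when frames are split evenly across machines."""
    frames_per_machine = math.ceil(total_frames / machines)
    return frames_per_machine * minutes_per_frame / 60

# A one-minute animation at 24 fps = 1,440 frames, 10 minutes per frame:
print(render_hours(total_frames=1440, minutes_per_frame=10, machines=1))    # 240.0
print(render_hours(total_frames=1440, minutes_per_frame=10, machines=100))  # 2.5
```

Ten days of rendering on one machine collapses to an afternoon on a hundred nodes, which is the whole premise of per-frame parallelism.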

Why Do Computers Take a Long Time to Render?

Though it is true that the time any one computer takes to render a scene or frame depends on its hardware and the complexity of the render, rendering is without a doubt one of the most memory- and processor-intensive activities a computer or network of computers can perform.

This is because photorealistic rendering involves a massive level of detail that the untrained eye cannot discern but can immediately identify if it is missing from a render. Such things as reflections, light diffraction, anti-aliased shadows and ray tracing are all paramount to creating a realistic and aesthetically pleasing render.



We hear sounds, see light and colors, feel warmth radiating from a fire. All of these are waves, just of different kinds and wavelengths. Sound waves require a medium to travel through, while light can also travel through a vacuum, but both are still waves.

All waves can be represented by a waveform. A waveform gives us a visual representation of the wave’s amplitude and frequency, so we can gauge its intensity as well as where it lies within the mechanical or electromagnetic wave spectrum.

Despite the differences in the way sound and light waves travel or propagate (particles in a sound wave move parallel to the wave's direction of travel, making sound a longitudinal wave, while the oscillations of a light wave are perpendicular to its direction of travel, making light a transverse wave), both are still defined in terms of their amplitude and frequency.

Sound Waves

Waves are energy, transferring from one point in space to another. When we refer to a sound, we indirectly refer to its frequency by indicating how low or high pitched the sound is, and its amplitude by how loud or soft it is.

Similarly, when referring to visible light, the frequency of the light will determine which color we see, with red being at the low end and violet at the high end of the spectrum, while amplitude indicates its brightness.

A wave is created when a vibrating source causes a periodic disturbance in the initial particle in a medium. It in turn disturbs the next particle, which disturbs the next, and so on, causing a wave to propagate along the medium as energy moves or transfers from particle to particle.

Along the wave, each individual particle vibrates at the same frequency as the original source. The period of vibration of each particle within the medium is therefore equal to the period of vibration of the source.

What is Frequency?

The unit of measurement of frequency is the hertz (Hz). Because of the great range of frequencies from the low end of the spectrum (sound waves) to the high end (gamma rays), kilohertz (kHz), megahertz (MHz), gigahertz (GHz), terahertz (THz), etc., are commonly used.


One hertz is one full cycle or one complete oscillation of the wave completed in one second. If we take a sine wave as an example, a complete cycle is when the wave starts at zero (or at rest), ascends to a positive peak, then descends through zero down to a negative peak, and back to the original state at rest.

If several cycles of a wave occur in a second, then the number of cycles (expressed as cycles per second) is also the frequency of the wave. Thus, the frequency of a wave can be expressed as 10 cycles per second (10 cps or 10 c/s), or as 10 Hertz (10 Hz).


The amount of time required to complete one cycle is the period of the wave (i.e., the inverse of its frequency). If a wave has a frequency of 10 Hz, it completes 10 cycles in one second, and therefore one cycle has a duration of one tenth of a second, which is its period.
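The reciprocal relationship between frequency and period can be expressed directly:

```python
# Period is the inverse of frequency: T = 1/f.

def period_seconds(frequency_hz: float) -> float:
    """Duration of one complete cycle, in seconds."""
    return 1 / frequency_hz

print(period_seconds(10))     # 0.1   -> a 10 Hz wave has a 0.1 s period
print(period_seconds(20000))  # 5e-05 -> a 20 kHz wave: 50 microseconds per cycle
```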

When referring to waves at the extreme high end of the electromagnetic spectrum, wavelength rather than frequency is often used. Wavelength in reference to frequency can be calculated with the formula λ = v/f (where λ = wavelength, v = speed of the wave, f = frequency).

Wavelength and frequency are inversely related: if a wave's wavelength increases, its frequency decreases, and vice versa, while speed remains constant.
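Applying the λ = v/f formula to sound in air (taking the commonly quoted 343 m/s for the speed of sound at room temperature):

```python
# Wavelength from speed and frequency: lambda = v / f.

def wavelength_m(speed_mps: float, frequency_hz: float) -> float:
    """Wavelength in meters, given wave speed (m/s) and frequency (Hz)."""
    return speed_mps / frequency_hz

# The nominal extremes of human hearing, in air at ~343 m/s:
print(wavelength_m(343, 20))      # 17.15   -> lowest audible tone: ~17 m
print(wavelength_m(343, 20000))   # 0.01715 -> highest audible tone: ~17 mm
```

The thousand-fold spread in audible frequencies maps directly onto a thousand-fold spread in wavelength, since the speed in a given medium is fixed.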

What is Amplitude?

While frequency measures how many wave cycles occur in a specified period of time (typically a second), amplitude is a measure of a wave's intensity. If the same musical note is played loudly and then softly, its frequency does not change, but its volume, intensity, amplitude, call it what you like, does.

A yellow light source can shine dimly or brightly. Again, its frequency remains the same and it still projects a yellow beam, but the intensity or brightness of the light changes. So while with sound we call it volume, and for light brightness, in all cases we are referring to one and the same property: the wave's amplitude.

The amplitude of a wave corresponds to the maximum distance a particle is displaced from its state of rest, along the medium in which the wave is traveling. Referring to our earlier definition of a wave, namely that “waves are energy, transferring from one point in space to another”, amplitude therefore represents the wave’s energy.


If we look at the waveform of a sine wave, amplitude is the distance between the crest (positive peak) and trough (negative peak) divided by two. Since it is a measure of displacement, its value is expressed in meters.
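That definition translates directly into code. The sketch below takes a coarsely sampled waveform and returns half the peak-to-peak distance:

```python
# Amplitude as defined above: half the crest-to-trough distance.

def amplitude(samples: list) -> float:
    """Half the peak-to-peak range of a sampled waveform."""
    return (max(samples) - min(samples)) / 2

# One cycle of a coarsely sampled sine-like wave:
wave = [0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7]
print(amplitude(wave))   # 1.0
```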

What are the Differences Between Frequency and Amplitude?

If we refer to the graphical representation of a waveform (e.g. a sine wave), typically time is plotted along the x or horizontal axis, and amplitude along the y or vertical axis. Changes in a wave's frequency then appear as cycles in the waveform that are more contracted or expanded, depending on whether the frequency is higher or lower respectively.

Amplitude on the other hand, would produce taller peaks and deeper troughs, representing an increase in the amount of energy of the wave. While both amplitude and frequency are key properties of any wave, they are not interdependent.

Unlike frequency and wavelength, where a change in the wavelength will have an inverse effect on the frequency (i.e. the larger the wavelength, the smaller the frequency), a change in amplitude will not have any effect on the frequency of the wave.

Another consideration is how waves are affected by the medium through which they travel, and what happens as they pass from one medium to another. What remains constant is the frequency of a wave, which does not change from medium to medium. So what does change?

The speed of sound through air is around 343 meters per second (m/s), whereas it travels at 1,481 m/s in water, and 5,120 m/s in iron. Unlike light waves and other electromagnetic waves, sound cannot travel through a vacuum. This is why sound waves are also referred to as mechanical waves.

In contrast, the speed of light through the vacuum of space is around 300,000 kilometers per second. It drops to 225,000 kilometers per second in water and 200,000 kilometers per second in glass. Hence, the speed of a wave is affected by the medium the wave travels through. Unlike sound waves, light waves can travel through a vacuum and are referred to as electromagnetic waves.

From the formula λ = v/f, we can see that since the speed of the wave changes as it passes from one medium to another, yet the frequency remains constant, the wavelength must also change. When a wave slows down on entering a new medium, as light does when passing into water, its wavelength decreases.
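A worked example, using the light speeds quoted earlier (300,000 km/s in vacuum, 225,000 km/s in water) and an assumed 600 nm vacuum wavelength:

```python
# Frequency stays fixed across a medium boundary, so wavelength
# scales with speed: lambda = v / f.

def wavelength_m(speed_mps: float, frequency_hz: float) -> float:
    """Wavelength in meters, given wave speed (m/s) and frequency (Hz)."""
    return speed_mps / frequency_hz

f = 3.0e8 / 600e-9   # frequency of light with a 600 nm vacuum wavelength

print(round(wavelength_m(3.0e8, f) * 1e9))    # 600 -> nm in vacuum
print(round(wavelength_m(2.25e8, f) * 1e9))   # 450 -> nm in water: shorter, as the wave slows
```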

Amplitude is also affected when a wave passes from one medium into another. Some of the wave’s energy is reflected, and so the amplitude is damped or attenuated. Furthermore, when a wave is traveling through the medium, its amplitude is attenuated over distance, due to scattering and absorption.

Since waves transfer energy, with mechanical waves such as sound waves, the transfer of energy of the wave is dependent on both the wave’s amplitude and frequency. Since each cycle carries some quantity of energy, the more cycles per second (i.e. the higher the frequency), the more energy that will be transferred.
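For mechanical waves, the standard result is that the transmitted power scales with the square of both amplitude and frequency (P ∝ A²f²). The sketch below compares relative energies, leaving the constant of proportionality out:

```python
# Relative power carried by a mechanical wave: P is proportional
# to amplitude squared times frequency squared (constant omitted).

def relative_power(amplitude: float, frequency_hz: float) -> float:
    """Power up to a constant factor: A^2 * f^2."""
    return (amplitude ** 2) * (frequency_hz ** 2)

# Doubling the frequency quadruples the energy carried, amplitude fixed:
print(relative_power(1.0, 200) / relative_power(1.0, 100))   # 4.0
```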

This is why low-pitched sounds tend to travel further than high-pitched ones: high-pitched sounds complete more cycles per second, and their energy is absorbed by the medium more quickly. With electromagnetic waves such as light, the energy transfer (in the classical wave picture) depends only on the wave's amplitude and is independent of its frequency.

Summing up, we can conclude that the difference between frequency and amplitude is that frequency remains constant, irrespective of the type of wave (whether it be mechanical or electromagnetic), or the medium through which the wave travels.

In contrast, the amplitude of a wave is affected by distance and by the medium through which the wave travels. In the case of mechanical waves, amplitude is also affected by the frequency of the wave. It could be said that frequency determines how a wave transfers energy, while amplitude determines how much energy is transferred.

The human ear is capable of hearing frequencies from 20 Hz up to 20 kHz, although this can vary from person to person, and also deteriorates with age. While younger adults are more able to hear frequencies approaching, and even surpassing 20 kHz, for those of a more advanced age, the upper limit of human hearing tends to fall well short of the 20 kHz upper limit.

Any frequencies above 20 kHz are referred to as ultrasonic. But to understand why ultrasonic waves are not audible to humans, we must first understand how sound is created and travels (or propagates), and how the human ear picks up and handles sound.
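The nominal limits can be encoded directly; note that the 20 Hz / 20 kHz boundaries are conventional figures, not hard physical constants (frequencies below 20 Hz are termed infrasonic):

```python
# Classify a frequency against the nominal limits of human hearing.
# The 20 Hz / 20 kHz boundaries are the commonly quoted conventions.

def classify_frequency(hz: float) -> str:
    """Return 'infrasonic', 'audible', or 'ultrasonic'."""
    if hz < 20:
        return "infrasonic"
    if hz <= 20_000:
        return "audible"
    return "ultrasonic"

print(classify_frequency(440))     # audible    (concert-pitch A)
print(classify_frequency(40_000))  # ultrasonic (typical for ultrasound devices)
```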

What is Sound?

Sound is a mechanical waveform that requires a gas, liquid, or solid medium in order to propagate. This implies that unlike electromagnetic waves (such as light), sound cannot travel through the vacuum of space (or indeed any vacuum).

Sound propagates through the medium as a longitudinal wave, in which the compression and displacement of particles take place in the same direction as the wave's direction of travel. Sound can be quantified through several properties: wavelength, period, frequency, amplitude, and speed.

Wavelength, period and frequency are interrelated. Think of them in terms of vehicles passing a check point. The speed of all vehicles is the same and constant (just as is the speed of sound). The wavelength is the length of each vehicle, from front to rear.

The amount of time taken from when the front of a vehicle crosses a given checkpoint, to when the rear clears that checkpoint, is the period and is obviously dependent on the length of the vehicle. The longer the vehicle, the greater the period (of time) required to clear the checkpoint.

The frequency is how many vehicles of the same length, placed in series one behind the other, could pass the checkpoint in one second. Again, since the speed of all vehicles is the same and constant, the longer or shorter the vehicle, the fewer or greater the number of vehicles that will pass the checkpoint, and hence the lower or higher the frequency.

Most sounds, of course, are very complex waveforms composed of many frequencies. A single note played on a musical instrument (for example, the note A), comprises the fundamental frequency, and an infinite series of harmonic frequencies. It is these harmonic frequencies that give each instrument its distinctive sound. The same note has a different sound on a flute than on a violin or trumpet.

The Human Ear

The human ear works by converting sound energy to mechanical energy, and then to a nerve impulse that is sent to the brain. The human ear consists of three sections - the outer ear, the middle ear, and the inner ear.

Sound waves enter the outer ear via the pinna (or auricle), travel down the ear canal and eventually to the eardrum, making the eardrum vibrate. The eardrum (or tympanic membrane), is extremely sensitive to sound vibrations, being able to detect the faintest of sounds and the most complex of audio patterns.


The eardrum passes the sound vibrations on to the auditory ossicles of the middle ear. The auditory ossicles comprise three tiny bones, the malleus, incus, and stapes, connected in series, which work to amplify the sound vibrations before transferring them to the inner ear.

The malleus, the first of the three bones, receives vibrations from the outer ear, and the stapes, the last bone in the chain, connects to the oval window of the inner ear, where its vibrations create ripples in the fluids contained in the cochlea.

The inner ear is located between the middle ear and the internal acoustic meatus, and is made up of the bony labyrinth and membranous labyrinth. The bony labyrinth consists of a series of bony cavities comprising the cochlea, vestibule and three semi-circular canals, all filled with a fluid called perilymph.

The membranous labyrinth is filled with a fluid called endolymph, and is made up of the cochlear duct, semi-circular ducts, utricle and the saccule. The inner ear is innervated by the vestibulocochlear nerve, entering the inner ear via the internal acoustic meatus, and dividing into the cochlear nerve, responsible for hearing, and the vestibular nerve, responsible for balance.

Within the cochlea, hair cells (the sensory cells of the auditory system), respond to sounds based on their frequency. Any incoming sound waves create ripples in the fluid inside the cochlea, provoking a deflection of the hair cell stereocilia and creating electrical signals in the hair cells.

High-pitched sounds will stimulate the hair cells in the lower part of the cochlea, and low-pitched sounds in the upper part of the cochlea.

When hair cells detect a frequency to which they are tuned to respond, they generate nerve impulses that are transmitted along the auditory nerve. These nerve impulses follow a complicated pathway in the brainstem before reaching the auditory cortex, the hearing center of the brain, where they are converted into meaningful sound.

Why are Ultrasonic Waves Not Audible to Humans

The human ear is a very delicate, complicated and remarkable organ. When sound waves enter the ear, they are converted into vibrations that are passed along the ear canal, through the middle ear and on to the cochlea, where hair cells respond to frequencies they are tuned to detect, converting them to impulses that are sent to the brain.

These hair cells therefore, determine the human audible range. Any frequencies not picked up by the hair cells, are effectively filtered out. This is a natural, physical limit, and the range of human hearing is nominally accepted as being 20 Hz to 20 kHz, with any frequencies above 20 kHz being referred to as ultrasonic.

Of course, not everyone can hear at the extremes of the human auditory range. In general, young humans have better hearing than the elderly, being able to hear frequencies at, or even just beyond the boundaries of human hearing.

Side-note: It is for this reason that ultrasonic waves/frequencies can be used for pest control. For instance, gophers dislike noisy environments - the same could be said for other burrowing rodents.

However, age related hearing loss, called presbyacusis, is common among adults over the age of 65, leading to the gradual decline in hearing in both ears. Also, excessively loud noise or continual loud noise (above 85 dB) over an extended period, can damage the cochlear hair cells, leading to impaired hearing.

These are just a couple of reasons why many people have a less than a perfect hearing range.

Computer technologies involve using a lot of jargon that can be confusing to untrained and unfamiliar individuals. Abbreviations provide an alternative and easier way to familiarize and understand computer terminologies.

This article will list some of the most common computer abbreviations and their respective meanings:

Computer Acronyms


BIOS means Basic Input-Output Service or the information in the CMOS chip that contains the computer's firmware or core elements for integrating computer hardware and software.


CPU stands for Central Processing Unit. This is the main processor component of the computer and is considered as the brain of the computer. It is responsible for executing computer tasks such as running software applications, executing utility tasks, running games, and more.


DDR stands for Double Data Rate. This mostly applies to RAM sticks sending and receiving twice the amount of data per clock cycle, making it significantly faster than its predecessor, the SDRAM or the Synchronous Dynamic Random Access Memory.

There are several categories of DDR sticks, namely: DDR2 or Double Data Rate 2; DDR3 or Double Data Rate Type 3; and DDR4 or Double Data Rate Type 4.


DNS stands for Domain Name Server. This is the name that websites use to allow people to reach websites online. It facilitates easy searching of websites that does not require the input of the IP address.


DVI refers to Digital Video Interface that is one of the most commonly used digital interfaces for computers and video devices. It is the next generation to the previous standard VGA or Video Graphics Array but was superseded by the newer HDMI connection or High Definition Multimedia Interface.


FTP means File Transfer Protocol which is the protocol that allows sharing files, documents, and data over the internet or a computer network.


GPU stands for Graphics Processing Unit. It is responsible for producing visual content from the CPU to the monitor. It is sometimes sold as a separate card known as a graphics card. However, modern computer systems now have GPU installed on the CPU or the motherboard itself.


HDD means Hard Disk Drive. This refers to the mass storage device that is composed of various magnetic disks that store and retrieves data using mechanical components.


HDMI means High Definition Multimedia Interface. This is the next generation video connection for computers and video devices that superseded the DVI connection. A unique feature of the HDMI connection is its ability to not only project video but audio as well, making it a standard interface for most video devices.


HTML means Hypertext Markup Language which is the format used to transfer files and facilitate its movement over the internet.


HTTP or Hypertext Transfer Protocol is the main instruction set that facilitates the movement of files on the internet.


I/O means Input/Output which refers to the movement of data into and out of the computer and its media components.


IGP stands for Integrated Graphics Processor which is a subset of the GPU that is connected to the motherboard or the CPU. Computers with IGP do not require an external graphics card to display content to the monitor. However, gamers often choose to add dedicated graphics cards as these are more powerful than IGP.


IP stands for Internet Protocol which pertains to the rules of systems on the internet. The IP Address, moreover, refers to the digital address of websites on the internet


ISP means Internet Service Provider. This is the company that provides services that allow residential and commercial establishments to have access and connect to the internet through computers and mobile devices.


JPEG stands for Joint Photographic Experts Group. It is the common standard format for image files due to its compressed nature and smaller file size compared with other image files such as GIF, TIFF, and RAW.


LAN means Local Access Network which refers to the connection of various computers to a central server. LAN connections allow wireless sharing of files and wireless communication of various computers on the network.


LCD means Liquid Crystal Display which is the most common display technology used in modern computer monitors. This replaced the CRT monitors and provided a slimmer feature and better performance.


LED means Light Emitting Diode which is most popularly used as a light source. In LED monitors, light-emitting diodes are installed on the monitor instead of the standard CCFL or Cold Cathode Fluorescent Lamps to provide a better light source for the monitor and provide greater performance in terms of color accuracy and screen refresh rate.

MAC Address

MAC Address means Media Access Control Address which refers to the digital address of a device connected to a computer network. The computer's MAC Address is found in the computer's network card.


NIC stands for Network Interface Card which facilitates the connection of the computer to the network. NICs can either be pre-installed in a computer or sold as a separate device for older computers.


NTFS stands for Net Technology File System. This is Microsoft’s proprietary file system used in most Windows operating software from Windows NT to Windows 10.


NVMe SSD means Non-Volatile Memory Express Solid State Drive which refers to the type of SSD card that delivers the fastest read/write speeds among SSDs today. It connects to the PCIe bus or Peripheral Component Interconnect Express instead of the normal AHCI bus or Advanced Host Controller Interface which provides a significant boost in read/write speeds of theoretical speeds reaching 3GB/s.


NVRAM or Non-Volatile Random Access Memory refers to the type of RAM that retains data even when the electricity source has been cut off or interrupted. This is a special feature of RAM because most RAM is composed of volatile memory which wipes data when electricity is cut off.


OS refers to the Operating System. This is the computer's primary program and interface that

run the computer and facilitates user command. It runs automatically during computer startup


P2P means Peer-to-Peer which is a network infrastructure that allows two computers to share files and communicate without requiring a central server. This is an effective form of connection between two computers as each computer is a server and a node at the same time, enhancing the speed of communication and transaction.


PCIe stands for Peripheral Component Interconnect Express which is an expansion interface in motherboards that allows connecting video cards and storage devices. It is an enhanced version of the PCI and provides faster processing speed for SSDs and graphics cards.


PDF stands for Portable Document Format. It is one of the more popular document file formats because it is an open format that all operating systems and applications can read and process without encountering formatting issues.


PNG means Portable Network Graphics which is also another popular image file format along with JPEG. PNG processes image files through lossless compression which results in greater quality images at the expense of slightly larger file size.


PS/2 means Personal System/2 which is a layout categorization of media devices into two primary colors, green and purple, to designate their specific function on the computer. IBM designated green for the mouse port and purple for the keyboard port.


PSU stands for Power Supply Unit. This is the computer component that connects directly to the power source and distributes electricity throughout the entire computer and its various components.


RAM or Random Access Memory refers to the type of computer memory that stores application data and information at significantly faster speeds than mass storage devices. However, RAM is naturally volatile which means data is wiped clean through every computer shutdown.


SATA means Serial Advanced Technology Attachment. It is the primary connection interface used to connect mass storage devices to the computer. The SATA interface is widely popular for the HDD but it is also available for SSD. However, the read/write speeds of SATA bottlenecks the potential speeds that SSDs can provide.


SSD stands for Solid State Drives. This refers to the new generation of mass storage devices after the HDD. It provides significantly faster read/write speeds than HDD and has several available interfaces to facilitate even faster speeds. Currently, the NVMe SSD connected through PCIe provides the fastest speeds among all types of SSDs.


UPS stands for Uninterrupted Power Supply. This is a device that functions as a power backup to the computers when the main electricity supply is cut off.


URL stands for Uniform Resource Locator. It is the digital address of a website on the internet more commonly known as web address.


USB stands for Universal Serial Bus. This is the most popular communication protocol between the computer and external devices. It also transfers data between the computer and external storage devices as well as facilitates communication between the computer and peripheral devices.


VGA means Video Graphics Array. It is the previous video connection standard between the computer and the monitor. It is considered as an obsolete connection protocol today as it only provides an analog interface compared with the digital interface common among DVI and HDMI connections.


VPN means Virtual Private Network which is a secure connection and interface between a user and a private network. It involves the extension of a private network to a public network, bearing similar security and functionality of interacting within a private network.


VRAM refers to Video RAM or Video Random Access Memory which is a type of RAM that is made specifically to store image data and files. It functions as the buffer between the CPU and the monitor to allow regulation of frames transmitted from the computer to the monitor.

Rocky MTN Ruby covers Computer Hardware, Components, Peripherals, Coding Languages, Gaming, and so much more.
Copyright © Rocky MTN Ruby 2021