Equalization and balancing when mixing a track
Let’s take a look at the basic rules for mixing a song.
Familiarizing yourself with the mix.
Before you get carried away with mixing, do the following on your virtual or real console:
1 Put all faders in the middle positions (not 0 dB, just the middle positions!).
2 If some instruments are hard to hear, turn up their volume and turn down the loud ones – although this situation should not arise in a digital studio in the first place (see “Levels in the mix” below). The main task at this stage is to make all tracks play at approximately the same volume.
3 Panning can be messy at this stage, with one exception: the bass, kick, snare and vocals must stay strictly in the center!
4 Turn off all equalization and effect sends.
5 Look through the contents of each track and remove long stretches where the instrument is silent (short ones too) – a good way to get rid of noise and other artifacts that can otherwise drive you to distraction hunting for their source once all the tracks play together. Be careful: this operation sometimes leads to the accidental deletion of quiet and/or short audio clips.
Find instrumental pairs (or triplets) – who plays with whom, who complements whom, who depends on whom – and start this review with the drums. Play around with their panning and volume; it’s fine if the overall volume of the track has to be raised. Listen to two, three, four tracks at a time, muting the rest.
Watch for instruments that have the potential for frequency clash and mark them in memory, or better yet, on paper.
Mark areas where the dynamics fall and rise, if any, in line with the dramatic arc you have in mind.
Determine the direction of the song, its pattern, and think through how to build that pattern over the course of the piece. Sometimes this process is like a performer adding emotion, character to dead notes when playing “from the sheet.”
Signs of an amateur mix:
- Relatively noisy and “beatless.”
- Sounds distant and blurred in the stereo field – the result of mindless use of reverb and effects
- Instrument levels “float”
- Inept control of instrument panorama
- Phase negligence problems
- Abuse of effects
- Presence of artifacts in the voice (sibilant “s” sounds, plosive “p” pops)
- Distortion (including digital) associated with unskillful compression
- Inconsistent levels in the whole mix or no contrast between parts of the composition
- Poor selection of instrumental tones or sounds that don’t match the style of the song
- Bad MIDI programming
- Bad arrangement
6 It’s time to turn up the volume levels on the “dry” tracks and adjust the balance and panorama of each track as you see fit.
7 You can take a break and listen to similar, already finished commercial releases by other artists. Listen to them through your monitors and prepare your ears for comparison. This step is called “tuning your ears.” Remember that the future mix depends not only on your knowledge, but also on the state of your health and well-being and the ability of your brain to respond adequately to external stimuli.
8 Now revisit the mix from a fresh perspective. It may cheer you up with the thought of how well everything sits and keep you working on the song for another half an hour. Or it may frustrate you with the anticipation of difficult technical, or even purely musical, problems to solve. Either way, this is a normal working situation.
Before panning the instruments, check the mix for mono compatibility. Check it again, after adding any effects.
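A quick numerical sanity check for mono compatibility is the correlation between the left and right channels: +1 means the channels sum to mono without loss, values near zero mean a very wide, phase-risky image, and negative values mean partial cancellation when summed to mono. A minimal sketch (the function names are my own, not from any particular DAW):

```python
import numpy as np

def mono_compatibility(left, right):
    """Correlation between channels: +1 = fully mono-compatible,
    0 = uncorrelated, -1 = out of phase (cancels when summed to mono)."""
    left = left - left.mean()
    right = right - right.mean()
    denom = np.sqrt((left ** 2).sum() * (right ** 2).sum())
    return float((left * right).sum() / denom) if denom else 1.0

def stereo_width(left, right):
    """Rough width estimate: side (L-R) energy relative to mid (L+R) energy."""
    mid = left + right
    side = left - right
    return float((side ** 2).sum() / ((mid ** 2).sum() + 1e-12))
```

Run the check on the stereo bus before panning and again after every effect is added, as suggested above; a correlation that drops toward zero or below warns that the mix will thin out on mono playback.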
Think of the mix in three dimensions: height, depth and width. The entire frequency range should be represented in the song, the sounds should have depth, and the instruments should be spaced out. Covering the frequency range, in turn, depends on knowing the range of each instrument, correctly identifying the frequency groups it needs, and hiding the unnecessary ones. Depth is achieved first of all with reverb and delays, and secondly with chorus and flanger. Purity and clarity are the goals we strive for, and panning is an indispensable technique for achieving them.
The most important elements of the mix.
Balance – the volume ratios between the participants of the mix.
Frequency range – the competent representation of the frequencies needed by the human ear.
Panorama – the arrangement of instruments in the stereo field.
Depth, volume – the presence of a sense of space in the mix.
Dynamics – the overall volume envelope of the mix.
Originality – the totality and combination of all these elements, plus musical originality and the presence of impressive effects.
This is all that a good mix requires.
Levels in the mix.
When recording your tracks, it is considered optimal for each part to be recorded as hot as possible, close to 0 dBFS. In terms of digital representation, levels above -6 dBFS make good use of the available bit depth. Certainly, some tracks will have to be made quieter when mixing, and some will need more volume. But all instruments will be built around the “foundation” – the bass and the kick drum. The experience of world “mixology” shows that their level in the mix sits around -5 to -6 dB. When you sum them, their combined peaks will digitally reach about -3 dB. Further mixing will require reducing their level (at the peaks) to -6 or -7 dB. Build your mix around the latter values: if the foundation is “standing” at -7 dB, then the rest of the instruments should be set at a volume relative to the bass and kick drum.
Other than that, remember that changing the signal amplitude by:
- 3 dB doubles the signal power (NOT the loudness!);
- 6 dB doubles the voltage;
- 10 dB increases the signal power by a factor of 10 and the perceived loudness by a factor of 2;
- 20 dB increases the voltage 10 times, the power 100 times, and the perceived loudness roughly four times.
If a device is specified as having a gain of 0 dB, it means that the output signal is exactly equal to the input signal. And nothing more!
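These relationships are easy to keep straight as formulas: a power ratio is 10^(dB/10), a voltage (amplitude) ratio is 10^(dB/20), and perceived loudness roughly doubles per 10 dB. A small sketch of these rules of thumb (the loudness one is only an approximation of how ears behave):

```python
def db_to_power_ratio(db):
    """+3 dB doubles power; +10 dB is ten times the power."""
    return 10 ** (db / 10)

def db_to_voltage_ratio(db):
    """+6 dB doubles voltage (amplitude); +20 dB is ten times the voltage."""
    return 10 ** (db / 20)

def db_to_loudness_ratio(db):
    """Rule of thumb only: +10 dB sounds roughly twice as loud."""
    return 2 ** (db / 10)
```

At 0 dB every ratio is exactly 1 – the signal passes through unchanged in every sense, as noted above.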
It is recommended to divide the participants of the mix into relative volume levels. The most rational approach assumes six degrees of audible volume, which is usually sufficient for mixing a modern popular song. The term “audible volume” does not imply a clear gradation on a scale measured in decibels – we are talking about the volume actually heard by the human ear. On the one hand this correlates directly with the Fletcher-Munson equal-loudness curves; on the other hand, the presence of certain harmonics in a sound can make it seem quite loud even if its average level is the same as the others.
As you understand, levels are relative. However, six steps of volume are a necessary and sufficient number for building the mix.
In principle, everything is simple, we take each track and match it with other instruments.
But order is important in this case:
- Drums
- Bass
- Lead vocal
- Main supporting instruments (keyboards or rhythm guitar)
- Leading instruments (solos, embellishments, leads)
- Auxiliary sounds or instruments (brass tutti, effects, etc.)
- Backing vocals

According to some recommendations of leading audio engineers, the voice comes right after the drums and bass; according to others, it comes penultimate, just before the backing vocals. I have left it third, and I don’t think that’s a mistake. The variation has to do with the style of music you have to mix: rock and blues compositions are best mixed in the order given, while in dance songs you can start processing the vocals right after the percussion.
Again, check where the instruments are silent and trim anything unnecessary from the tracks. Samplitude Pro has a very handy feature for session storage: working in its virtual project (VIP), you trim all the “silent spots” in the tracks, and the program can later save the entire project with the trimmings taken into account, which significantly reduces the disk space occupied by the VIP.
Let’s start balancing the mix.
First of all, it should be noted that the “stacking” of instruments depends directly on the conceived arrangement. The arranger should know that each instrument needs its own musical range. If two instruments are written to play in the same range, they may start fighting for the listener’s attention; in such a situation the arrangement should be reconsidered. And if the arranger believes himself to be the greatest composer in the world, and arguing with him is useless, then the conflicting instruments have to be “reconciled.”
Let’s have a little “literacy” regarding the armament of the arranger in terms of musical elements.
The basis is the rhythm, the rhythmic pattern of the song, led by the drums. This also includes the rhythm guitar or keyboards playing the rhythm track. Of course, the bass is an active member of this group, excluding songs where it rules “by itself,” such as a jazz trio.
The pad is the cushion of the mix; it hardly needs an introduction.
Rhythm – can be melodic or percussive. Unlike the main rhythmic pattern, this rhythm usually runs against it – pulsating, more often triplet-based or syncopated. Percussion, rhythm guitar or arpeggiated synth lines can play this role. They all bring dynamics to the mix, diversifying and enlivening the song.
The soloist can be the singer (vocalist), or a solo instrument.
Fill-ins, embellishments and stabs – they fill the space between the solo phrases, playing against that line.
Read more about each link in the mix.
The drum sound must match the character of the song.
- drum sounds should be uniform and consistent with each other, especially the kick and the snare
- the drums in pop music should be as real, dense and hard as possible
- the timing of the drums should be “free”, but not asynchronous or arrhythmic
- leading the rhythm they should be powerful, but not “stick out” of the mix
- they must not be monotonous throughout the song and have a variety in dynamics and patterns
Panning is obligatory. There are two ways of representing the kit: 1) from the point of view of the audience, or 2) from the drummer’s ears (how he hears the kit while sitting behind it).
But the latter option is used less and less often, because the drums end up too scattered across the panorama.
The snare and the kick drum should be in the center; exceptions can be made only for jazz projects. The hi-hat goes 10 to 50% to the right or left (for a left- or right-handed drummer).
The two upper toms (hi and mid) are spread somewhat to the sides of the panorama, and the floor tom further to the right or left.
Don’t spread the toms too widely across the panorama – it’s distracting.
You need at least two different cymbals, panned out by 50-75%.
If a “room” recording has been made (overheads), check it for mono compatibility; sometimes it is useful to narrow its stereo width.
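As a side note on panning in general: DAWs typically use an equal-power pan law so that a sound keeps the same perceived level as it moves across the panorama. A minimal sketch of one common sine/cosine version of such a law (not the only one in use):

```python
import numpy as np

def equal_power_pan(pan):
    """pan in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
    Returns (left_gain, right_gain); left^2 + right^2 is always 1,
    so the total power stays constant across the panorama."""
    theta = (pan + 1) * np.pi / 4  # map [-1, 1] onto [0, pi/2]
    return float(np.cos(theta)), float(np.sin(theta))
```

At center both gains are 1/√2 (about -3 dB each), which is why a centered kick or snare does not jump in level when nudged slightly off-center.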
Real drums are very problematic to get the right sound from – you should almost always correct their frequency spectrum (cut the sub-bass and raise 3-4 kHz; for more detail see the “frequency profile” of drums) and correct the dynamic envelope of the kick with compression. Equalization is usually better placed after compression, but try the opposite order as well.
You can add a slight, short reverb to a synthetic kick drum.
If you have time to experiment in search of a new sound, add a short (1-5 ms) delay to the kick drum. This can lead to a kind of comb-filter interference in the bass and flanging of the higher frequencies. If you add positive (or negative) feedback, the effect may even become unpredictable – so test it.
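Mixing a signal with a copy of itself delayed by a couple of milliseconds is, in effect, a comb filter: notches appear at odd multiples of 1/(2·delay), which is the “interference” and “flanging” described above. A toy sketch (direct, slow loop for clarity; parameter names are my own):

```python
import numpy as np

def comb(x, sr, delay_ms=2.0, mix=1.0, feedback=0.0):
    """Add a short-delayed copy of x to itself (feedforward comb),
    optionally with feedback from the output (recursive comb)."""
    d = max(1, int(round(sr * delay_ms / 1000)))
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n]
        if n >= d:
            y[n] += mix * x[n - d] + feedback * y[n - d]
    return y
```

With a 2 ms delay the first notch sits at 250 Hz and the first reinforcement at 500 Hz – right in the kick’s territory; feedback sharpens the peaks and makes the result harder to predict.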
The snare drum is also capricious. Traditional snare equalization assumes a dip in the midrange; try adding aggressiveness by simply boosting the lows and tops and leaving the midrange alone. (For reverb, see the relevant section – nothing complicated there.)
If you’re working with live drums, don’t give the toms a lot of top end, to keep them from interfering with the cymbals. It’s better to trim their bass and raise the mids a bit.
Note that cymbals respond better to subtractive equalization, so don’t boost the tops when reaching for a light sound – it’s better to clean up the bass and parts of the midrange. Cymbals should not sound aggressive. If the cymbal sound seems short, add reverb; this works well for sampled drums, but with live ones such a technique can blur the other instruments that bleed into the mics.
Don’t lose the detail of the drums (we’re talking about drums only here, not percussion) – every sound should be clearly audible, even in quiet songs and have a clear place in the panorama, not smeared over it.
Reverb on the kick drum and snare drum is sometimes useful when ONE of them is in mono (it creates a shifting closed/open stereo effect).
A gate on the toms can change the stereo picture – keep this artifact under control.
The bass and the kick drum should act as one, which is a requirement of most popular songs. Together they have to make the song “groove.”
The bass track is usually monophonic and sits exactly in the center.
Reverb is contraindicated for bass, but sometimes it is worth adding an imperceptible, short one.
In ballad-type songs, chorus can help create the “romance” of the mix
Too many loud low frequencies can “eat up” dynamic space, not allowing other instruments to “grow” at the mastering stage.
Like the kick drum, many – if not all – bass tracks are helped by CUTTING low frequencies, NOT boosting them. In the context of an entire mix, the relative emphasis on the upper harmonics caused by reducing the lows will make the bass sound more powerful, not less powerful, as one might expect.
Only the bass and kick drum should dominate the downbeat; everyone else should have a certain proportion of their low-frequency component cleaned out.
If there are two basses in the song (a synthesized and a live one), give preference in the low frequencies to one of them, and watch their interference in the other frequency ranges, cutting the unnecessary parts out of one of them.
A live bass guitar always needs compression; if your bass track is sequenced, it should still be compressed to shape the dynamic envelope. You can thicken the bass by setting a relatively long attack and a short release time – but not so short that the compressor starts tracking the waveform of the bass itself, which leads to pumping in the bass sound.
In live playing, the bass will always be slightly out of sync with the kick drum. There is no point in re-recording the part; it is better to pass the bass through a gate whose side-chain is fed by the kick drum signal. Now the kick controls the timing, opening the gate and letting the bass sound at the moment of the kick. The process is controlled by the gain-reduction level in the gate.
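The side-chain gate idea can be sketched as follows: a peak envelope follower on the kick decides when the gate on the bass opens. This is a deliberately simplified model (hard open/close, no attack or hold stages; function and parameter names are my own):

```python
import numpy as np

def envelope(x, sr, release_ms=50.0):
    """Peak envelope follower with instant attack and exponential release."""
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros(len(x))
    level = 0.0
    for n, v in enumerate(np.abs(x)):
        level = v if v > level else level * rel
        env[n] = level
    return env

def sidechain_gate(bass, key, sr, threshold=0.1, floor=0.0):
    """Let the bass through whenever the key (kick) envelope exceeds
    the threshold; otherwise attenuate it to `floor`."""
    gain = np.where(envelope(key, sr) > threshold, 1.0, floor)
    return bass * gain
```

A real gate would add attack and hold stages and a smoother gain curve – that smoothing is what the text means by controlling the process via the gate’s gain reduction.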
The kick drum in tandem with the bass (for blending methods see other sections) must complement each other, effectively forming a new instrument in the mix.
Cleaning up live drums with gates should be done at the recording stage, not during mixing.
There are songs with pads and without. It is better, of course, to have pads in all the right places, because they hold the mix together, give it some body, and can make up for the mistakes of other instruments.
The pad, in fact, is a chord sequence played in the mid- and low-frequency range. A rhythm guitar can also play the role of the pad, especially when recorded with double-tracking. The pad is the basic musical tone of the song, the basis for the other instruments to get along with. For an allegorical comparison, imagine walking barefoot on a fluffy Persian rug, and then someone takes the rug away… that is roughly how a song devoid of pads feels.
The pad should be transparent, not rough, and in places not so much audible as felt behind the other instruments. The pad is a structural support for the song; it should not be annoying or monotonous.
Check the sound of the pad on headphones. Many keyboard pads sound too smeared out on the panorama, so it makes sense to narrow them down so that the sound becomes “more centered”.
For double-tracked pads, you should also bring the channels a little closer together, and thus give it a little more natural chorus.
But if you want to leave two independent guitars hard right and left, yet also get some middle from them, try this: send the right guitar to a simple, uncomplicated chorus, take back only the “wet,” processed signal, and pan it 50% to the left. Do the same with the left guitar, taking the chorus output 50% to the right. Adjust the volume of the resulting middle to your liking.
If the pad is recorded in mono, run it through a stereo chorus.
If you have two different (or similar) timbres (instruments) participating in the pad, try shifting them to the center and running them through a good stereo chorus instead of spreading them out over the channels.
That way, the pad should be wide, but not so wide as to leave a hole in the middle of the panorama. Chorus is a great tool for pad processing, but note that the chorus shouldn’t throw itself around the panorama, much less haphazardly. Try setting a small delay and a low modulation speed in the chorus and play with these parameters. By the way, some instruments, like the piano, do not take well to chorus – but if you want an out-of-tune piano from an American saloon, give it a chorus.
By tweaking the pad at 700-800 Hz, almost completely removing the lows (everything below ~250 Hz), and using smooth curves on a parametric EQ, you can get the desired transparent sound. Handle the pad’s high frequencies carefully so as not to flood the mix with them.
Now you’ve got drums, bass, (voice) and pad – it should all sound coherent.
When you’ve checked the pad, you’re at a mixing checkpoint. This is the place to take a break and have some tea – it’s an important point in the process. Save this version of the mix: when you go on to add everything else, mistakes may occur, and you’ll know where you can go back to and revise your actions.
For PC-based workstations, it’s worth doing a separate pre-mix, dumping the tracks you’ve prepared there – this saves machine power and allows you to focus on adding other instruments.
Adding the main parts.
The Role of Equalization in Mixing.
Equalization is the foundation of recording and mixing. It’s the “black magic” of music and many sound engineers don’t like to give away their trade secrets.
To confuse people even more, some “educators” present many equalization issues as dogmatic rules that students can break at their own risk. One such rule is “get the sound as it is at the source”-don’t take it at face value. Experimentation is the basis for success. But on the other hand, excessive equalization is also harmful.
What equalization can do:
- Eliminate noise.
- Mute interfering harmonics
- Enrich the sound
- Regulate the amount of “meat” and “tapping”
- Distribute instruments (closer to or farther from the listener)
- Create volume in the mix
- Equalize the level of the mix (using track automation)
Standard techniques in frequency tuning of tracks.
The sequence of actions when correcting frequencies on the track.
Put your equalizer in the “zero” position.
Listen to the track. Don’t touch the EQ band controls until you’ve evaluated the sound – after all, it’s possible that this instrument may not need frequency correction. There are three key undesirable tones in the sound: a “dirty” sound, a nasal and overdriven midrange sound, and a too ringing, high-pitched sound. This is the main thing to pay attention to first.
Working with “dirt” (100-800Hz): Check each instrument for potential dirt content. Bass drums almost always have dirt, unless it’s a rap or fancy hip-hop drum. Others in this series are toms, bass guitar, piano, acoustic guitar, and harp. Dirt is usually centered around 300 Hz. If you remove too much of it, the instrument will sound thin, because this “dirt” also contributes to the frequency “body” of most sounds. When cutting out the “dirty” frequencies, always make sure that you don’t lose the low-frequency base of the instrument. If you do have to do this, you can compensate for the loss by boosting at 40-60 Hz. When getting rid of dirty frequencies, set Q as narrow as possible, because if you use a wide range of frequencies, the danger of capturing a fraction of the “base” of the sound increases.
Eliminate annoying tones: harsh, honky, or nasal sounds are all found between 1000 and 5000 Hz. Vocals, electric guitar, cymbals and hi-hat often require attention in this area. Depending on the type of music, the snare drum sometimes deserves an extra frequency revision. The best way to detect an annoying frequency is to boost it and sweep; if your ears are not enjoying what they hear, you are on the right track, and the midrange of the instrument clearly needs correction. Never boost or cut too much in this area, and make sure you don’t end up with a dull sound. If you have to cut heavily, you can compensate for the lost timbre by boosting between 5 and 8 kHz. When working with annoying zones, again keep the Q as narrow as possible, for the same reason stated above: with a wide Q there is a danger of losing the “body” of the sound in the midrange.
The high-frequency edge (5000-8000 Hz): how much to boost here depends largely on the style of music. R&B, dance music, and rock and roll require more brightness than other styles. When boosting high frequencies, use a medium Q – it is more natural for this range. If you’re still in doubt about what Q width to set, start with the narrowest one and widen it, controlling by ear.
Bypass the EQ again and compare the sound before and after your work.
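The narrow-Q cuts and boosts described in these steps are usually implemented as a parametric (“peaking”) filter. Here is a sketch using the well-known Robert Bristow-Johnson “Audio EQ Cookbook” peaking formula, expressed as a scipy second-order section (the helper name is my own):

```python
import numpy as np
from scipy.signal import sosfreqz, sosfilt

def peaking_eq_sos(sr, f0, gain_db, q):
    """RBJ cookbook peaking filter as a scipy second-order section.
    Negative gain_db with a large q gives the narrow, surgical cut
    recommended above; apply it to audio with sosfilt(sos, x)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    a0 = 1 + alpha / A
    return np.array([[(1 + alpha * A) / a0, -2 * np.cos(w0) / a0, (1 - alpha * A) / a0,
                      1.0, -2 * np.cos(w0) / a0, (1 - alpha / A) / a0]])
```

For example, peaking_eq_sos(48000, 300.0, -6.0, 4.0) cuts 6 dB of “dirt” around 300 Hz while leaving DC and the top octaves essentially untouched.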
Applying EQ to noise reduction.
Noise is everywhere, especially in the microphone path. To cut unnecessary low frequencies, apply a high-pass filter removing everything below 60-100 Hz – except on a bass guitar recorded from a combo amp. Remove noise from the high-frequency region – the result of radio interference on guitars and keyboards – by filtering the tops. Here’s how it works: while the instrument (keyboards) is sounding, select the low-pass filter on a paragraphic equalizer and pull it down from 20 kHz until you hear it affect the sound. Then pull back toward the tops, stopping with a small, safe margin from the point where it touches the sound. This is the easiest and most effective way.
The same maneuver can be used with other likely sources: the kick drum, the low-register pickups of electric guitars, bass guitars, and even electric pianos. Strange as it may seem, believe me – they have nothing useful above 8 kHz, only noise. All of it can be safely removed.
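The low-cut and top-cut cleanup described above can be sketched with ordinary Butterworth filters; the 80 Hz and 8 kHz corner frequencies below are illustrative defaults, not fixed rules:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def cleanup_filters(x, sr, highpass_hz=80.0, lowpass_hz=8000.0, order=4):
    """High-pass away rumble below the instrument's useful range and
    low-pass away hiss above it (Butterworth filters, applied in series)."""
    sos_hp = butter(order, highpass_hz, btype='highpass', fs=sr, output='sos')
    sos_lp = butter(order, lowpass_hz, btype='lowpass', fs=sr, output='sos')
    return sosfilt(sos_lp, sosfilt(sos_hp, x))
```

Skip the high-pass for bass instruments, as noted above for a bass guitar recorded from a combo amp.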
If that does not help, use the Dolby or internal noise suppressors from Adobe Audition.
The last hope looks like this: run the sound through the highpass (and lowpass if possible) filters, virtually “killing” the sound, and run it through the Aphex Aural Exciter. This device will add trimmed tops by generating them from the main, stored sound. This technique has proven itself quite well and has been at the heart of the restoration of many rare recordings. True, many engineers prefer to keep silent about the fact of its application.
Equalizer as a harmonic damper.
Anyone who has tried recording live drums knows it is not easy. One of the problems is getting a good snare sound: its resonance contains frequency components that give an unpleasant, specific coloring – it is difficult to describe in writing; this honk must be heard. These harmonics can and should be tamed with subtle, literally surgical equalization with a narrow Q. Of course, you need solid experience and good “hearing” to recognize where these annoying harmonics accumulate.
Here comes the flip side of the coin, which recording “purists” tend to overlook. Microphones mounted a few centimeters from the sound source usually have a cardioid pattern, and cardioids exhibit the so-called proximity effect: at short distances they become more sensitive to low frequencies, and they pick up quite a lot of them. So rolling off the excess bass is a must.
Normally, every musical sound is considered to have harmonics around the fundamental tone: sub-low, high, and higher. Each of these occupies its own range. The whole idea of “enrichment” is to raise or lower these components without affecting the fundamental. We have three frequency bands, and if you:
- Raise or cut the sub-harmonics, the sound becomes warmer or cooler;
- Raise or cut the upper harmonics, the sound becomes louder or quieter without affecting its basic dynamics;
- Raise or cut the higher harmonics, the sound becomes more or less bright.
Manipulating the harmonics rarely makes the instrument overbearing in the lows or highs, because these harmonics all lie well above or below the fundamental, even considering the full range in which the instrument plays. Of course, if the frequency change borders the fundamental, the effect is more noticeable; but outside this boundary you can turn the EQ all the way up or down to achieve the desired timbre or blend with other instruments. Note that in most cases, the narrower the Q, the less likely you are to make clearly audible changes in timbre – which is especially useful when matching the participants of the mix.
“Meat” and “tapping.”
It’s hard to think of a better name for these mix characteristics. By and large they follow from the previous topic, where we dealt with the single sound of an instrument. But a mix consists of many components, and each one contributes to the formation of artifacts unnecessary to us and our ears.
“Meat” does not mean the prevalence of bass in the mix. It is that dense, clean, heavy sound that we feel at discos not with our ears but with our chests and stomachs. This “meat” is first of all the child of the bass and drums, but the guitars, keyboards, and anything else with bass in its spectrum contribute their share to the total. And this “meat” needs to be managed.
How to manage the bass.
Bass and sub-bass frequencies from 40 Hz up to 250 Hz make up the low-frequency range of your mix. The search for the optimal amount of bass in an instrument (and, for that matter, in the mix as a whole) can follow this algorithm:
- set a parametric EQ band to a boost of 8-10 dB with a narrow, half-octave Q;
- sweep through the bass range until you find the frequency where the sound has the desired amount of density;
- adjust the amount of boost at this frequency to taste, checking at what gain the sound turns muddy;
- add a moderate amount at half the frequency you found; if that half is too low (below 40 Hz), use double the frequency instead. (Example: if your frequency was 120 Hz, add a couple of decibels at 60 Hz; if it was 50 Hz, add some at 100 Hz.)
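The sweep itself has to be done by ear, but the bookkeeping in the algorithm – octave-spaced candidate frequencies to try, and the half/double “partner” rule from the last step – can be sketched like this (function names are my own):

```python
import numpy as np

def sweep_frequencies(lo=40.0, hi=250.0, steps_per_octave=6):
    """Candidate centre frequencies for the boost-and-sweep step,
    spaced evenly in octaves across the bass range."""
    n = int(np.ceil(np.log2(hi / lo) * steps_per_octave))
    return [lo * 2 ** (k / steps_per_octave) for k in range(n + 1)]

def octave_partner(f0, low_limit=40.0):
    """The 'half the frequency, or double it if half falls below
    ~40 Hz' rule from the last step of the algorithm."""
    half = f0 / 2.0
    return half if half >= low_limit else f0 * 2.0
```

octave_partner(120.0) returns 60.0 and octave_partner(50.0) returns 100.0, matching the examples above.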
If you listen to any dance composition, the pattern schematically looks like “tum-ta, tum-ta.” “Tum” is the work of the kick drum (and bass). “Ta” is the weak beat – the snare drum plus its helpers at the syncopation points. Judge for yourself: it’s these “tums” that make our feet dance; they’re the rhythmic backbone of the song. Unlike the “tum,” the “ta” hits us a little higher – in the chest; as ridiculous as that sounds, it really does.
The “tapping” in the mix is the cymbals and the high-pitched percussion, but also the vocals and their reverb. The upper frequencies are the litmus test of a quality mix. The steel strings of the guitars also help build the upper floors of the mix. These upper floors require close attention: they contain frequencies that can be intrusive, and our ears tire of them quickly. This matters especially when you’re working on a mix rather than listening to it at rest – you get tired even faster. So monitor this region not only by ear but also with a spectrum analyzer.
Of course, you can overdo this “holy” trio and ruin the main idea of the song and its emotional content. A sense of proportion is needed everywhere – though everyone’s sense of proportion is different…
Placement by the depth of the mix.
This theme takes us into the realm of fine frequency adjustment.
The standard way to create an instrument’s “place” in the mix is to apply some kind of reverb, but the result doesn’t always meet expectations: reverb does not position the player precisely in the space. For an application of equalization in creating the “proximity effect,” see the section on vocals.
Here you need to remember the most important secret of mixing: playing with “light and shadow” – contrasting one instrument’s part against another. Precise equalization of the bass and treble, in plus or in minus, plus a short reverb as an additional positioning tool, can make the mix big without forcing every member of the mix to fight for volume.
Creating a space effect in the mix.
This idea of creating space in the mix is based on the following example. Suppose an instrument is not clearly audible in the mix, and highlighting it with volume doesn’t make it sound good. Find the group of instruments that shadow it with their frequency range, and within that group find the frequency region to trim. The result: everyone’s volume is virtually unchanged, yet the instrument becomes more apparent. You can apply the equalizer to a subgroup, or to each instrument individually (if resources allow). Notice that subtractive equalization is used. This trick is successfully applied to create space for vocals, which is what many people do today.
Spectral mixing is the basic principle by which you can achieve clarity and order in the mix. Its essence is simple: give each sound or instrument its own place in the spectrum of audio frequencies. To repeat a picture probably familiar to everyone: an acoustic guitar “muddies the waters” in the low-mid range. Cut off a chunk of its low-frequency spectrum – the guitar will not suffer, and the composition will become much cleaner.
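One way to make the search for the masking band less of a guessing game is to compare band-by-band energies of the buried instrument against the masking group: the band where the maskers most outweigh the instrument is the first candidate for a subtractive cut. An illustrative sketch (the band edges and function names are my own, not a standard algorithm):

```python
import numpy as np

def band_energy(x, sr, lo, hi):
    """Spectral energy of x between lo and hi Hz."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    mask = (freqs >= lo) & (freqs < hi)
    return float(spec[mask].sum())

def masking_report(buried, maskers, sr,
                   bands=((100, 250), (250, 500), (500, 1000), (1000, 2000))):
    """For each band, the energy of the masking group relative to the
    buried instrument; the band with the highest ratio is the first
    candidate for a subtractive cut on the maskers."""
    report = {}
    for lo, hi in bands:
        e_buried = band_energy(buried, sr, lo, hi) + 1e-12
        e_mask = sum(band_energy(m, sr, lo, hi) for m in maskers)
        report[(lo, hi)] = e_mask / e_buried
    return report
```

The numbers only point at a suspect band; the actual width and depth of the cut should still be decided by ear, as the text insists throughout.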
Automatic mix equalization.
This is a really useful trick. Everyone has faced the problem of a “popping” section appearing in the mix. The reason can be an unfortunate, yet musically necessary, combination of timbres. Such areas may repeat several times in a song and are similar in their frequency nature. You can find the culprits in the tracks and “nail” them with volume, but who knows how long that takes across a whole song? Instead, try experimenting with EQ at these points by programming it with automation.