
Other Pages (2357)

  • Contact Form | Blackett Music

    Contact Us. Questions? Comments? Reviews? Business inquiries? We would love to hear from you! Please fill out the contact form, use the Site Chat, or call us at (206) 636-6238.

  • Connecting Artists and Fans

    Welcome! Within these pages you can find art, music, playlists, books, fun, and much more! Discover new artists and explore their work using the menu bar above. While you are here, please consider supporting your host Ben Blackett, whose music is shared below. You can also view an auto-updating list of artists and their content, with current stats from Bandcamp, Wavlake, Youtube, and Soundcloud.

The current music streaming business models are broken and do not help independent artists. The four sites listed above offer a slightly better choice: Bandcamp for purchasing, Wavlake for "pay what you want", Youtube for discovery, and Soundcloud for monetization. Please visit these sites and support independent artists like myself.

Why subscribe? Did you know that subscriptions are one of the very best ways to support your favorite artists? When you subscribe to BlackettMusic.com, you are not only helping me maintain and grow this website, you are also supporting every artist with a profile here. Your subscription truly helps! Refer a friend for a discount, earn discounts with your activity, or buy a gift card.

Subscription tiers:
    • $5 Essential Access: create your own artist Profile(s), add unlimited content per Profile, and get early-release access to new music from Ben Blackett.
    • $12 Supporter: everything above, plus a $10 Gift Card for any Store purchase and 50% off Ben Blackett's entire discography as digital downloads.
    • $32 VIP: everything above, plus 20% off all future store purchases, access to Works In Progress with opportunities to provide feedback, and social media shoutouts and mentions where appropriate.
    • $75 Presidential: everything above, plus access to remix one or more songs (stems supplied); contract to be determined prior to access being granted.

Join our email list!


Blog Posts (16)

  • Terminator times... I'll be back!

    Do you remember the old Terminator movie? Are we living in Terminator times in the music industry? Nowadays there's AI this and AI that... even AI-based mastering software. Bullsh*t... Music is made by humans, for humans, and no algorithm will replace that. Music carries emotions and creates emotions. No computer algorithm will replace that, or even come close to understanding which emotions to emphasise in a song to make it work better. No AI-based mastering software will come close to bringing out the feel of a song and shaping it. Period.

One popular AI-based automated mastering platform shouts on its website: "There’s nothing quite like hearing your song polished and ready for release, and thanks to AI, this can happen in an instant. Using machine learning, genre-specific mastering options ensure your audio files always sound best-in-class, with no need for expensive mastering studio rates or complex audio processing software. Say hello to the future of music mastering." I call it bullsh*t... Here's why.

Mastering, for example, is a very complex operation and probably the most human aspect of audio engineering (apart from production and writing the song itself, of course). Mastering is not just slapping on a few limiters, pushing the track, and calling it a day. Beyond emphasising micro and macro dynamics and preserving the tonal balance of the mix, mastering is about psychoacoustics: the way we hear and, more importantly, feel the song. It's about listening. Mastering is about 70% listening, listening, and listening again. Your job is to work on the feel, finding little magic bits and bringing them out to work for the song, not against it. Loudness always comes second. With AI-based software, the algorithm picks a LUFS target, analyses the EQ spectrum, and tweaks both to achieve some sort of match with a specific template.

Mastering is hard not because it's an obscure part of music production; it's hard because of the level of concentration and attention to detail required when listening to the material. Mastering is mostly listening. Tweaking hardware gear or plugins is only a small part of it, and so is focusing only on the frequency balance and LUFS metering (which can be very, very misleading when judging loudness, but that's another story). Mixing and mastering are not only about technical aspects (although those are still important). They are about the vision and feel of the song.

Automated mastering uses AI to simulate the decisions made by a mastering engineer. It's a computer making its best guess about what to do with your music... yes, its best guess... How good can that be?

Another thing I would like to mention is that mixing doesn't work in compartments, and I'm afraid all AI-based software works exactly like that. Your low end will only be right if your mids and high frequencies are right. The frequency mirror effect is a real thing; not many people realise that working on the high end actually has an impact on the low end too. Looking mindlessly at a LUFS and/or RMS meter, without having your mix well balanced frequency-wise and without a good crest factor free of huge transient spikes, is pointless and useless. LUFS (even integrated) is NOT the ultimate loudness guide. Which is why algorithms aiming for a specific LUFS figure while doing automated mastering are not a good idea.

Don't get me wrong, this software can be useful for preparing a quick rough mix or a quick reference master. Nothing more, in my humble opinion. So... A TRADITIONAL MASTERING ENGINEER WILL ALWAYS BE THE ULTIMATE OPTION.
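A side note on why a single loudness number can mislead: two versions of a track can sit at the same RMS or LUFS reading while having completely different crest factors (peak-to-RMS ratios), i.e. completely different amounts of transient life left in them. Here is a minimal Python sketch of that idea, assuming only NumPy and using a toy signal in place of a real mix:

    import numpy as np

    def peak_rms_crest(x):
        """Return peak (dBFS), RMS (dBFS) and crest factor (dB) of a mono signal."""
        peak = np.max(np.abs(x))
        rms = np.sqrt(np.mean(x ** 2))
        to_db = lambda v: 20 * np.log10(max(v, 1e-12))
        return to_db(peak), to_db(rms), to_db(peak) - to_db(rms)

    # Toy example: a "dynamic" signal vs. the same signal pushed into hard clipping.
    sr = 44100
    t = np.arange(sr) / sr
    dynamic = 0.9 * np.sin(2 * np.pi * 220 * t) * np.hanning(sr)  # gentle envelope, intact peaks
    limited = np.clip(dynamic * 4.0, -0.9, 0.9)                   # "louder", transients flattened

    for name, sig in [("dynamic", dynamic), ("limited", limited)]:
        pk, rms, crest = peak_rms_crest(sig)
        print(f"{name:8s} peak {pk:6.1f} dBFS  rms {rms:6.1f} dBFS  crest {crest:5.1f} dB")

Normalise both to the same RMS and a bare loudness meter will call them equal, while the crest factor (and your ears) will tell you the limited one has lost its transients.
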
Lucas/LUMIC Studio

  • Sing for me!

    What elements do we focus on while listening to music? Well... it really depends who you ask. An audio engineer would probably say 'all of them', but the average listener would say something a bit different... And we all make music for people to enjoy - all people, not only audio nerds (sorry guys, just kidding...).

So what does the average listener, the general audience, focus on the most while listening to music? The answer is simple: vocals and drums. I'm not sure if you're aware of the 'focus points' theory in audio engineering. Basically it says that throughout the entire song there should be focus points that grab the listener's attention, the elements that stand out at that particular moment of the song. Of course they change as the song develops, but mainly they are built by the vocals and drums (lead instruments take over when there are no vocals at that moment). That's why it's so important to focus on those two elements while mixing.

If you analyse the majority of songs, you'll notice that the other supporting elements, like guitars, synths and pianos, vary from song to song. I'll focus on guitars as I'm a guitarist (but this applies to other instruments as well). Take 10 songs and you'll have 10 differently sounding guitar tracks (even in the same genre), and if they work for the song and, more importantly, work nicely with the bass guitar, then the job is done. When listening to the song we hear that the guitars sound good, but our main attention is focused more on the vocals and drums.

In this blog post I would like to focus a bit on vocals (it's part 1 of, well... thousands of future posts about vocals). We humans are so used to the natural sound of the human voice because we hear it on a daily basis. When EQing vocals we should pay attention to the range between 1 kHz and 2.5 kHz, as this region is home to the frequencies that shape our natural perception of the voice. Messing them up usually leads to unnatural-sounding results, and our ears will pick it up very quickly, for the reason stated above.

Well... there are a few things you can do, but let's not talk about the magic plugin chain or the next cool blinky light to make the vocals sound better. Let's talk about a solid EQ process to get your vocals to clear up and sound amazing while still sitting nicely in the mix.

The first thing (after comping, tuning if necessary, gain staging the entire performance, and de-essing) is removing any ringing frequencies. Given the proximity effect and comb filtering, there are usually some ringing frequencies in the low mids (I'm not mentioning an HPF, as that depends on the vocals, the song, and the way they were recorded). The next region where ringing frequencies appear is around 2.5 kHz and a little above. Vocals also often sound bad around 700 Hz, so that's the next place to look.

I always cut the very top on vocals, and I do that at the very start. There's nothing nice above 10 kHz (12 kHz tops) - anything up there is just noise. Just be aware of the slope of the filter: an LPF at 10 kHz could be affecting frequencies down to 5 kHz or lower, depending on the slope (the "corner frequency" of a filter, i.e. the 10 kHz here, is the point at which it is attenuating the signal by 3 dB). The point about those "air" EQs is that they have very broad shapes. The 40 kHz band on the Maag EQ, for example, affects frequencies down to 3-4 kHz; it just has a very gentle slope, so it's barely noticeable and there's a subtle lift in the audible top end. It's all about the shape of the EQ curve.
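To put rough numbers on that corner-frequency point, here is a small sketch, assuming NumPy and SciPy, that builds two low-pass filters with the same 10 kHz corner but very different slopes, and prints their gain an octave below the corner and at the corner itself:

    import numpy as np
    from scipy import signal

    sr = 48000   # sample rate in Hz
    fc = 10000   # LPF corner frequency ("10k")

    # Same corner frequency, very different slopes.
    gentle = signal.butter(1, fc, btype="low", fs=sr)  # ~6 dB/oct, broad and gentle
    steep = signal.butter(4, fc, btype="low", fs=sr)   # ~24 dB/oct, much tighter

    for name, (b, a) in [("1st order", gentle), ("4th order", steep)]:
        # Evaluate the response at 5 kHz (an octave below) and at the corner.
        w, h = signal.freqz(b, a, worN=[5000, 10000], fs=sr)
        gain_db = 20 * np.log10(np.abs(h))
        print(f"{name}: {gain_db[0]:+5.1f} dB at 5 kHz, {gain_db[1]:+5.1f} dB at 10 kHz")

Both read roughly -3 dB at the stated corner, but the gentle one is already shaving level an octave below it, which is exactly how a very broad filter with a high corner can still change the audible top end.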
I will especially cut the high end on vocals that have been recorded with cheaper microphones, because the sound of those high frequencies is usually really nasty! So, LPF vs. air boost... I might use both to sculpt the tonal balance of the vocal to where it needs to be. Of course the numbers mean nothing, as every single voice is different, but those regions are problematic quite often and vary slightly from recording to recording. Also, please remember that vocals like dynamic EQ, so the track is only processed when necessary.

Let's go back to tuning... Use it with caution, as every pitch correction brings artefacts, and remember not to tune sibilants (separate them from the core part of a particular phrase), because tuned sibilants are the first sign of amateur tuning. Remember that Auto-Tune was invented in 1997; before that there were countless epic songs where singers could actually sing without depending on technology, and they still created timeless anthems, relying only on their talent and voice. With all the parts in tune, remember this: in the studio we do have a button for pitch correction, and we also have a timing button... but we do not have an emotions button. And if the vocal recording is lacking that, there is nothing we can do. Sorry.

All good, but how do I get my vocals to stand out? Often we lose the vocals because we are masking the primary frequency. When I say the primary frequency of the voice, I mean the frequency that shows up as the highest spike when you look at an EQ analysis. So... let's say you bring up your vocal track, look at the frequencies with SPAN or your EQ or whatever, and you see that the most pronounced spike is at 1.7 kHz. You want this vocal to stand out, so do this: boost 1 to 2 dB at 1.7 kHz, but boost 3 dB at 850 Hz as well. The second frequency is half of the primary, which puts it an octave below. Here's what will happen. When we hear two notes together, in a chord on a guitar for example, we naturally hear the higher note more than the lower note. What we don't realise, though, is that the lower note is actually giving our ear a frame of reference with which to interpret the higher note. The presence of the lower note helps us hear the higher note better. By taking half of the primary frequency of the voice and boosting it a touch more than the primary, you are using exactly the same principle (there's a small sketch of this below). In another post I'll focus on how to make the vocals stand out in mastering and in the mix itself using sidechaining and other techniques.

Saturate the magic... Saturation is very often the secret weapon when it comes to vocals, whether used subtly or for sound design. I'll generally put saturation straight on the vocal to some extent and treat that as the "raw" sound, so the saturated version is what goes out to parallel channels if they exist. That just comes from my purely analogue days, when a recorded vocal always came through at least one outboard compressor (sometimes two), a preamp, the analogue console, and then on to tape - that's a lot of different types and levels of saturation from the gear (not to mention then coming back off multitrack tape, through the console, and out onto stereo tape at the mix!). It's the vocal sound we're used to, along with most of the general public. Parallel channels then provide support if necessary, and the difference between styles and songs is really just a difference in the amount and type of saturation/distortion/compression applied.
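Here is a minimal sketch of that double boost, assuming NumPy/SciPy, a hypothetical primary frequency of 1.7 kHz found with an analyser, and a placeholder array standing in for the vocal. The bell filter follows the standard RBJ "Audio EQ Cookbook" peaking design:

    import numpy as np
    from scipy import signal

    def peaking_eq(f0, gain_db, q, fs):
        """Peaking (bell) biquad from the RBJ Audio EQ Cookbook."""
        A = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return b / a[0], a / a[0]

    fs = 48000
    primary = 1700.0                 # hypothetical "highest spike" found with SPAN or similar
    vocal = np.random.randn(fs)      # placeholder for the real vocal track

    # Small boost at the primary, a touch more an octave below it.
    for f0, gain in [(primary, 1.5), (primary / 2, 3.0)]:
        b, a = peaking_eq(f0, gain_db=gain, q=1.0, fs=fs)
        vocal = signal.lfilter(b, a, vocal)

The exact frequencies, gains and Q here are placeholders; the point is only the relationship between the two boosts: a gentle lift at the primary and a slightly bigger one at half that frequency.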
Back to saturation and parallel channels: generally, denser mixes call for a more focused vocal sound, which means keeping it really controlled - so more compression/saturation directly on the vocal. More open mixes favour a more parallel approach: you keep all the dynamics of the vocal and use the parallel tracks to add depth and clarity in a more consistent way. Having different processing between sections is definitely a good option too. I tend to split the audio onto separate channels, though, rather than automate anything, when the changes are for whole sections (so I have a "verse vocal" channel and a "chorus vocal" channel, or whatever).

We could write entire books about vocal processing; this was part 1 with some general thoughts. There's still so much to cover - compression, reverbs, slap-back delays, panning, automation and much more - but that will be covered in other posts. Lucas/LUMIC Studio
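As a small footnote to the parallel idea above, here is a tiny NumPy-only sketch, with a placeholder signal and made-up drive/blend amounts, of keeping the dry vocal intact and mixing a saturated copy in underneath it:

    import numpy as np

    fs = 48000
    dry = 0.5 * np.random.randn(fs)                      # placeholder for the vocal track

    drive = 4.0                                          # how hard the parallel copy is pushed
    saturated = np.tanh(drive * dry) / np.tanh(drive)    # soft-clip saturation, roughly level-matched

    blend = 0.25                                         # how much of the parallel channel to mix back in
    vocal_out = dry + blend * saturated                  # dry dynamics stay, density is added underneath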

  • What has the biggest value in your studio?

    What has the biggest value in your studio? Your ears, your monitors, and your room are your real tools. Not the plugins or the hardware... those only help the real tools do their job properly. While many of us focus on (or even get obsessed with...) chasing flashy new plugins or buying ever more sophisticated and expensive gear, we often forget or neglect the things that have the biggest impact and the biggest value in a music studio: the ears, the monitors, and the room itself.

We often hear 'your entire mixing/mastering setup is only as good as its weakest link'. In many cases that weakest link is the monitoring environment. Although it's not the most exciting and sexy thing, we should always focus on it first. Getting new gear and collecting new plugins in the belief that they will elevate our mixes to the next level sounds like fun, but in reality it has far less impact than getting the monitoring environment right.

Your studio could be lying to you... Room modes can cause a null at your listening position, so effectively the room is EQing around that frequency to make it a bit quieter, which means you turn that frequency up to get it to sound right to you. The top end has the same sort of problems too: speakers not reproducing the frequencies well, and flutter echoes in the room masking things. If you're not getting an accurate representation of the frequency response coming out of your speakers at the listening position, you're not going to make the right decisions when mixing. A track may seem like it has too much in the low mids, so you cut and cut and cut, but you just can't seem to get rid of that build-up at 200 Hz or wherever. It's hard to pin a problem like that down to one thing without actually hearing the mixes vs. the masters and without knowing your room, but monitoring and room treatment are the first things to get right. If you can't hear it, you can't fix it. Even in a less-than-ideal room, with some basic treatment killing a few room modes, you can get a long way just by spending a lot of time listening to really high-quality commercial masters so you get used to the sound of the room. It also really helps to be careful about the level you listen at, because even a small change in volume can affect how we perceive the frequency balance. (A quick way to estimate where the worst room modes sit is sketched at the end of this post.)

Before you get any new toys, make sure your playground is right. Before you buy anything new, whether analogue gear or plugins (yes, this applies to plugins too), make sure your room is right. Cover at least the basics of acoustic treatment; it should be your first priority. Your entire mixing/mastering setup is only as good as its weakest link, and even the fanciest gear is good for nothing when your monitoring environment is not right. Sometimes even basic room treatment makes a huge difference by killing a few room modes. The same applies when your ears are not trained well enough. All the plugins and gear are only tools, and even the best tools are good for nothing when you don't know how to use them.

The other thing worth mentioning is your ears and how 'experienced' they are. The more you mix and the more experience you gain, the more you realise that throwing millions of complex techniques into a mix, or using piles of flashy plugins, is just pure bullshit. You develop your taste and train your ears. 'Less is more' becomes your technique. These days I think people focus too much on the thousands of superb and complex mixing techniques seen on YouTube, and on flashy new plugins, rather than on what's most important...
Art and the song itself. The message and the emotions it carries, and how to emphasise them. Just to show you that your ears and talent have the biggest value in your studio, consider this: you can have the most modern DAW and dozens of flashy new plugins, but the best-mixed and best-sounding album in music history is still Michael Jackson's 'Thriller'. Mixed nearly 40 years ago, on a pair of Auratone 5C speakers. It shows that nothing beats talent and good ears.

So... focus on the most important things first. Your ears, your monitors, and your room are your real tools. Make them work... Happy mixing! Lucas/LUMIC Studio
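To illustrate the room-mode point from the post above, here is a short sketch in plain Python, using made-up room dimensions, that lists the first few axial mode frequencies of a rectangular room from f = n * c / (2 * L). These are the frequencies around which the worst peaks and nulls tend to sit, depending on where you listen:

    # Axial room modes of a rectangular room: f = n * c / (2 * L)
    SPEED_OF_SOUND = 343.0  # metres per second at roughly 20 degrees C

    def axial_modes(length_m, count=4):
        """First few axial mode frequencies (Hz) along one room dimension."""
        return [n * SPEED_OF_SOUND / (2 * length_m) for n in range(1, count + 1)]

    # Hypothetical small home-studio room, dimensions in metres.
    room = {"length": 4.2, "width": 3.1, "height": 2.4}

    for name, dim in room.items():
        modes = ", ".join(f"{f:5.1f}" for f in axial_modes(dim))
        print(f"{name:>6s} ({dim} m): {modes} Hz")

For a room of this size the first modes land roughly between 40 and 75 Hz, with their harmonics stacked above, which is exactly the region where an untreated room is most likely to lie to you about the low end.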
