Memories of Softimage


With the recent announcement of the end-of-life for Softimage XSI, memories came flooding back from many years ago, when I was initially learning 3D animation. Back in the late 90's, while I was working in advertising producing TVCs, I was getting extremely interested in learning how to create the magic of the CGI I had seen in Jurassic Park several years earlier. After convincing our production company we needed to get on the bandwagon, we eventually came back to the office with a beautiful Xmas present: a copy of Softimage 3D, the very software that created the T-Rex. (For some additional memories of the T-Rex, see this article on )

At the time I was really developing as an editor, but I spent countless hours learning Softimage, and thought I would eventually make my living creating CGI. My career ultimately went the editor/producer route, but to this day I still spend a good amount of time keeping my 3D skills up to date. Today my software of choice is Maya, but I still have a soft spot for Softimage. So much so that I saved all of the original software and documentation, as you can see in these pictures. I entertained the idea of finally discarding them this past summer, but I am so glad I decided against it. I may do very little 3D animation on a professional level anymore, as I just do not have the time, but my love for these software packages has not dwindled at all.

Shooting RAW with Canon 5DmkIII

For my tutorial on working with DaVinci Resolve to create editing dailies, click here.

These videos were shot using the Magic Lantern RAW hack on the Canon 5DmkIII, with a 64GB Komputerbay CF card I ordered from B&H. The above video was shot with the inexpensive Canon 50mm f/1.8. I brought the footage directly into After Effects, pulled a few sliders to make sure the white balance was OK, and exported it for Vimeo. The thing that struck me more than anything was the insane amount of sharp detail being lost in the mkIII's native H.264 recordings. The workflow is actually quite easy: I am using Premiere Pro CC and DaVinci Resolve in an offline/online workflow, like the old days. The biggest issue is actually storage. Many of the Premiere tutorials I am finding online right now no longer work, because either Blackmagic or Adobe made a change that broke the old way of matching back your conform, so I will be doing a tutorial in the next few days to show you what is working for me. I may even see if I can put together an FCPX or Avid workflow too, for those of you who may be interested.
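To give a sense of why storage is the bottleneck, here is a rough back-of-the-envelope calculation of the RAW data rate. This is a sketch only: it assumes 1920x1080 at 24 fps with 14 bits per pixel, and actual Magic Lantern recordings will vary a bit with crop mode and file overhead.

```python
# Rough data-rate estimate for 14-bit RAW video on the 5D Mark III.
# Assumed parameters: 1920x1080, 24 fps, 14 bits/pixel (no compression).
width, height, bits_per_pixel, fps = 1920, 1080, 14, 24

bytes_per_frame = width * height * bits_per_pixel / 8   # ~3.6 MB per frame
mb_per_second = bytes_per_frame * fps / 1e6             # decimal megabytes/sec
minutes_on_64gb = 64_000 / (mb_per_second * 60)         # 64 GB card in MB

print(f"{mb_per_second:.0f} MB/s, roughly {minutes_on_64gb:.0f} minutes on a 64 GB card")
```

In other words, a 64GB card holds only on the order of twelve minutes of footage, which is why an offline/online workflow and serious storage planning matter so much here.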

The below video was also shot RAW, but the second shot (a close-up of the headlight) was a test of the crop-in mode (5x or 10x) now available thanks to Magic Lantern. For this one I used the offline/online workflow with Resolve, where I put a quasi-70's-type grade on it.

Editing a Network Television Special with Adobe Premiere

My edit room while editing "The Lyman Bostock Story"

I have used Premiere on and off for years now. Back in the late 90's and early 2000's, when I was editing at my day jobs on powerful machines running Discreet *Edit and Avid Media Composer, if I wanted a "home" system, I had to find a lower priced alternative. Enter Premiere with a DVStorm video card by Canopus. Great for doing side projects, as well as practicing my editing skills on my own time.

Then came a little product called Final Cut. When it hit v4, it became my personal editor of choice, and Premiere was put in the closet for years. Being a heavy Photoshop and After Effects user, I have generally had a copy of Premiere hanging around; I just never bothered to install and try it. When the new "Pro" incarnation of Premiere hit CS5, there seemed to be enough there to take another look. While the features on the back of the box were certainly worth noting, it delivered more in promise than in reality. By version 6 it was at a point where I was ready to start using it for some of my side projects, and I did so as much as possible to get fluent again. Although I thought it worked really well, there was something about it that still did not feel right, though I couldn't put my finger on it. It felt a little like a toy. It got the job done, but the road was a bit bumpy getting there.


When they indicated at my day job at the Network that the "next" version of Premiere sounded like it was going to be worth looking at, I wasn't quite sure. I knew it worked well enough on a current MacBook Pro with a local RAID system, but I certainly had no confidence it could handle the incredibly stressful job of being our everyday editor of choice, with a team of editors, a very large network, and tens of thousands of hours of online footage. Although we are in the process of upgrading our workflow, we are still using the Mac Pro towers we had when we launched the Network in 2008. Nevertheless, the suite was installed on our systems, and we were asked to try it on our jobs whenever possible to see how it would go. It quickly became obvious that the one thing the aging towers needed was new video cards, to up the stakes on the Mercury Engine and help make up for the older processors. Some new Nvidia cards later, it was up and running, at least as well as version 6 had been before it.

Then the new builds of "CC" started rolling in, and it was absolutely amazing how quickly things changed. Half the time you couldn't even articulate exactly what was changing, but it just kept feeling better and better. A little performance improvement here, a small interface change there, and six months later Premiere CC felt like an entirely different product. I have never worked in software development, but I was surprised how fast the updates were coming. It's almost as if Adobe had a complete secret version of Premiere in a back room, and would just release a little bit of it each week to keep us excited. Surely they couldn't be coding these changes this quickly. I was impressed and getting very excited. I knew the summer was coming, and I was approaching the start of a show I really wanted to push through Premiere.
Unfortunately, the official release date for CC was too late for the start of our show, and I wasn't about to start a big project on a beta version of Premiere; plus, the AJA drivers weren't quite ready. It's a shame, because I really wanted the new multicam features for that job, but I started that edit in FCP and put Premiere in the back seat again.


In mid-July 2013, as I was wrapping up that job, I knew I had a few weeks before we would start on our next show, THE STORY OF LYMAN BOSTOCK, covering the 1978 murder of the California Angels star outfielder. It's a short 30-minute special, but this particular project posed lots of challenges for us. We knew we had an air date coming in late September, but we had not shot a single frame of the planned 15-20 interviews. They would be shot at locations all around the country, on a multitude of different cameras: Canon C100, C300, C500, mkIIIs, a Sony F3, a Red Epic, and some Varicam. We also had hundreds of hours of archival footage, as well as some archived interviews with people who had passed. There were resolutions from SD up to 5K, and every codec under the sun. I decided I had to push some small jobs through Premiere first, to make sure it would work for the larger project. For several weeks while Bostock was in pre-production, I did numerous small featurettes for the Hall of Fame, as well as our daily show, MLB Tonight. Not only was Premiere fast, it was stable, and that was huge for me. I have a tendency to crash FCP a minimum of three times a day, so this felt like a revelation.


As we began production on Bostock, my biggest concern was the amount of footage we were going to be bringing in, which can be a major issue for FCP. At the Network we bring in over 3500 hours of footage a week, we keep months of footage online at all times, and I wanted much of this to be available to me in Premiere. I was also starting to get interview shoots back, many of them with two and three cameras. The first thing I did was use Media Encoder to send low-res versions to our Grass Valley asset system, LD Edit. From there, the producers, assistant editors, and I could make select reels or rough edits if need be, and XML them back into Premiere. We hit a little snag here in that Premiere would not properly read the Grass Valley XMLs, so I had to open them in FCP and use it as a cleaner to re-export. I'm not sure which side is most responsible here, but I suspect the Grass Valley export is not pure FCP XML, but rather its own flavor of it. Once back in Premiere, I would re-link to the camera originals. Because of time, we were editing native as much as possible, at 1080p24. With the RED footage that started at 5K, I did export a 1080 ProRes file, as well as the proxy for our assets. For the two-camera shoots, I would use the built-in audio sync feature, new to CC, which worked great. For the more complicated shoots of multiple takes over numerous cameras, I would bring the footage into PluralEyes, sync it, and export a sequence back to Premiere, which I would then turn into a multicam clip.
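When an XML roundtrip like the one above fails, it helps to confirm the file is at least well-formed FCP7-style XML before blaming the NLE. Here is a minimal sketch, using only Python's standard library, of the kind of sanity check I mean; the element names follow the FCP7 xmeml interchange format, and any file name shown is hypothetical.

```python
# Quick sanity check of an FCP7-style XML (xmeml) before importing elsewhere.
# A sketch only: it verifies the file parses, has the expected root element,
# and reports the xmeml version and clip-item count, which is often enough
# to spot a nonstandard export.
import xml.etree.ElementTree as ET

def inspect_xmeml(path):
    tree = ET.parse(path)            # raises ParseError on malformed XML
    root = tree.getroot()
    if root.tag != "xmeml":
        raise ValueError(f"unexpected root element: {root.tag}")
    version = root.get("version", "unknown")
    clips = root.findall(".//clipitem")
    return version, len(clips)

# Example usage (hypothetical file name):
# version, count = inspect_xmeml("bostock_selects.xml")
# print(f"xmeml v{version}, {count} clip items")
```

A check like this won't fix a nonstandard export, but it tells you quickly whether the problem is broken XML or just a dialect one application reads and another doesn't.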


With the large number of interviews in the show, I thought this would be a good opportunity to test Premiere's speech-to-text feature. Although we have transcripts and timecode breakdowns, I figured it would be nice to be able to search the audio to quickly find bites I might be looking for. Overall, it worked OK, but not great. The quality of the transcription was mediocre at best, which was a bit disappointing. The audio quality was great, and many of the subjects were very well spoken, but it still struggled. We do have transcripts, and you can use those files to help Premiere along in the transcription, but here is where my biggest issue comes in: it only reads Adobe Story files. This is an unnecessary step, and feels like a power play I would expect from Apple, but not Adobe. I have a text file sitting right there; I should be able to use that, as I do not have Story running on our main edit machines. Adobe should know we will use products because they are done well and useful, not because they force us into them. So I didn't use it, and the results were less than stellar. Another gripe: you can only search single words, not phrases. So if you know the subject says "I went to the store", you would have to search "went" or "store", which seems a bit silly. Overall, this feature still needs some work.


It was once we actually started editing that the benefits, speed, and power of Premiere started to come through. Because it is codec agnostic, the only real decisions you need to make when editing with CC are resolution and frame rate, and for this show we went 1080p24. Even though we need to deliver our final content at 720p60 XDCAM for the network, we are now being sure to future-proof as much as possible. When shooting a couple of interviews on the C500, we did go 4K for some of it; after the show is over, I will deal with that footage and check how viable an option 4K is for the near future. All the tests we did taking our 1080p24 out to 720p60 via Media Encoder were very promising, and very fast, so that wasn't a concern. Just edit away, and bring it into XDCAM land as the last step.

Switching your keyboard to FCP7 settings gets you about 90% of the way to feeling like you're just using a better version of FCP7, which is why you often hear the internet referring to Premiere as "FCP 8", and I do not think that's a negative. The timeline navigation is tremendous, and very fluid. I love the new way you expand the audio waveforms. It just works so smoothly; it's apparent Adobe is either really listening, or some of the developers are really using the program. It's the little things about CC that are hard to pinpoint, but they just make things work so well. Keeping in mind our Mac Pros are almost six years old, it's only going to get better for us. Creative Cloud has breathed new life into our aging systems.

This being a half-hour show, we actually have 22 minutes of content. We go about it by breaking our story down as best we can into four segments of about 6-7 minutes each, and then watch it through with placeholders for anything that hasn't been shot yet. I generally take a shot at an editor's cut first, then have the producers come in, and we see what works and what doesn't. This is the critical time of the edit, not because you are making things look and sound good, but because you are making sure the story is right and we are telling it the right way. For this show, it took us about four days of shuffling things around, cutting things out, and seeing what's missing. This is also the time we are looking at whatever VO lines are needed to bridge information the viewer needs.
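I mentioned above that our 1080p24-to-720p60 tests via Media Encoder were fast and clean, and part of the reason is that the frame-rate math is friendly: each 24p source frame maps to either two or three 60p output frames in an alternating cadence. This little sketch shows the frame-count arithmetic only; Media Encoder may well use interpolation rather than straight frame repeats, so treat it as an illustration, not a description of Adobe's implementation.

```python
# How 24p maps to 60p by frame repetition: alternate repeating each source
# frame 3 times and 2 times (a 3:2 cadence), so 24 source frames fill
# exactly 60 output frames per second.
def cadence_24_to_60(num_source_frames):
    """Return how many output frames each source frame occupies (3, 2, 3, 2, ...)."""
    return [3 if i % 2 == 0 else 2 for i in range(num_source_frames)]

repeats = cadence_24_to_60(24)
print(sum(repeats))   # 24 source frames -> 60 output frames
```

Because the ratio works out to a whole number of output frames every second, no frames are dropped or blended unevenly, which is why the conversion can be both quick and predictable.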


Once we are satisfied that the show flows correctly, we make sure it's segmented well for commercial breaks, with a running time of about 24 minutes or so. It's good for us to go into the online portion of the show about two minutes heavy. At this point I start to lay in music, as well as pull green screens and VFX. We have been using After Effects for VFX since day one. We also have SMOKE and NUKE when needed, but in this case I would be taking care of the visual effects myself, right in the edit bay. Using DLL (Dynamic Link), I would be sending about 15 shots from Premiere to AE. My current way of using DLL is to send a shot to AE, but immediately undo on the Premiere side. It's a quick way to get footage over, but it loses the actual connection. Because of network traffic and our aging machines, DLL has a tendency to hang a bit. I know it's more an issue on our end than on Adobe's, because in my home studio DLL works great, but that's on a current MacBook Pro. We do an amazing amount of green screen work for a network sports channel, so AE gets a good workout. I also had to use Mocha for some tracking, to remove glares from a couple of photos and to replace a monitor or two. Also, at the top of each segment, Bostock's widow is seen going through an old scrapbook of photos, and in order to make a couple of edits work for continuity, some photos she was holding had to be replaced. This post is more about Premiere as an editor, but AE, as always, holds its own against any compositing program out there. Sure, I wish it had nodes, but it gets the job done, and it's just very familiar.


Although my background is in audio production, the one thing I will not be doing for this show is the final audio mix. I really do not have the time, as we are getting close to delivery, but we also have a great audio engineer who works in one of our two Fairlight rooms. For those unfamiliar, Fairlight is a high-end audio hardware/software combo geared towards production. As we finalize the segments, I take them into Audition and do a rough mix, getting it as close to air-ready as possible. Due to time constraints, my mix is the one that appears on the press release/review copies. I put things in as good an order as possible, then OMF it over the network to the Fairlight room. I spent a LOT of time learning SpeedGrade over the past year, and I really, REALLY like it, especially the color tools themselves, and I had full intentions of making it my color grading platform for this show. Unfortunately, as editing went on, it became evident that it wasn't going to be an option, due to the input/output workflow of using SpeedGrade. Also, its tracking is very slow, and it still doesn't come close to DaVinci in the Power Window area. Because the show was shot very flat, we are using the grading process to breathe some life into the images. We really needed the ability to XML a timeline that conformed perfectly, and that is just not an option yet in SpeedGrade. I will continue to use SG as much as possible on future projects; it just wasn't the right choice for this show. (NOTE: Just as I posted this, Adobe announced that support for Premiere Pro projects in SpeedGrade will be available Oct 15th. Not in time for this project, but I will certainly be using it going forward.)


We still have a couple of weeks left on the online edit, as well as final approvals and such, but I cannot say enough how well Premiere has held up. I'm gonna have to knock on some wood, but it has not crashed a single time during the edit of this show, and it has done everything I asked and expected of it. I have personally been a Creative Cloud customer since Adobe introduced it, and I love the way it works. Now that they have laid all the cards on the table, I look forward to large updates during the course of the year. The one place Adobe still falls short is media management, but seeing the way they have been listening, I am fairly certain they know we are asking, and I have full confidence they will get it right. It's been fast… it's been fun… it feels like the NLE I have been waiting for. For anyone interested in watching the result, you can find the information here:

We can't all cure cancer, but . . .


I certainly do not blog often, and when I do, it's usually about some new camera, TV show, movie, or piece of software I like. I never thought what would bring me back to write something would be quite as serious as this post.

Before this past April, I would classify my life as "comfortable". Working in the big city, in an industry I love, doing a job I would probably do for free, and getting paid a good living to do it. Not only that, I got to stay "home", and live on the Jersey Shore, where I grew up. Nothing like the word CANCER to open the door to reality for you. 

Back in early April, when we got the word that my wife had indeed tested positive for breast cancer, a flood of emotions hit us in a wave like no other. The first is denial: "No, that can't be right," or "That only happens to people on TV." Next is fear, a level of fear I wish none of you ever have to feel in your lifetime. Cancer is a crazy scary word. And true fear can eat you up inside like a monster.

It was only once we digested the reality, and figured out how to take this head-on and deal with it, that the fear turned into anger. And not anger at the world, or God, or anything like that. Anger at myself. Was it a bit of feeling sorry for myself? Absolutely… but it was still anger.

When we checked into the hotel at Sloan-Kettering in NYC, there was a boy in the lobby, probably around 12 years old, about the same age as my son Nicholas. His head was bald from the chemo treatments he had obviously been going through, and he looked very frail and weak. It was at that moment that my anger at myself went into overdrive.

Back when I was in school, I actually did very well, despite the fact that, like many kids, I only did just enough to get by. Skate through, and move on. I thought to myself that with even the slightest amount of effort, I could have gone to medical school. I could have been an oncologist. I could be helping heal little kids with cancer, and I could be changing the world. Instead, I chose to make movies and TV shows. Really? This is the best I could do? Regardless of whatever professional success I have had, what a lazy SOB I felt like. Why would I want to do something as meaningless as this?

It was the very next morning that that very same little boy changed my tune again. As I came down to the lobby, he was there again, sitting on the couch, playing on his iPad, smiling from ear to ear. So I sat down for a few minutes and watched him. It was amazing to see how happy he looked. At that moment, he didn't have cancer. He was a normal little boy; he was a Jedi Knight, or the captain of his school football team. For those few minutes, the crappy reality of the real world didn't exist. He was happy, and that's what he deserved.

We all know Steve Jobs not only couldn't cure cancer, he ultimately succumbed to it. That doesn't mean he didn't change the world. Obviously we cannot all be Steve Jobs, nor do we have to be TV producers, movie directors, professional athletes, or rock stars. A month or so ago, when my sink had a major leak, my plumber was the most important person in the world. In the same way, at that moment, that iPad was this little boy's escape, and Steve Jobs was the most important person in the world.

There are many great people who serve this world in ways I can't totally comprehend. Whether it's the amazing doctors at cancer centers around the world, the policemen and firefighters who rush into collapsing buildings to save people they have never met, or the teachers who teach our kids for far less pay than they deserve. This is what they do, and they are damn good at it.

I finally realized that all you need to do to change the world is take whatever it is you do very seriously, and strive to be the best at it. If you do that, somewhere along the way, you too can change the world. Over 20 years ago, my 18-year-old self decided that not only could I not cure cancer, I wasn't even going to try. Now I know that if somewhere there is someone watching a show I worked on, being entertained for a few minutes and escaping the sometimes harsh reality of real life, well then, I'm OK with that.

Please Apple....


I haven't had time to properly work with the new 10.0.3 version of FCPX, although I did apply the upgrade the day it came out. The first time I tried to open a recent project, it crashed, but we will not let that influence anything here today. As a matter of fact, what I am writing here is really not about the new version; it's more about my ideas on how to fix FCPX so it will be much more universally useful to all aspects of media creation. Nothing in the recent update really changed my overall views of the program, as I have always expected Apple to add features, and I fully expect this to continue. I have been rather outspokenly crabby about the editing paradigm Apple has adopted, but I do not think it is beyond saving.

It would be very easy for me to just say things like "Give me tracks" or "Fix broadcast output", but that would be a cop-out, as well as stupid. The idea here is to give some specific gripes, along with my ideas on how to fix them, within the parameters of the ideals of FCPX. I will try to explain why my issues are troublesome when applicable, but I will assume some level of editing background.

Without a doubt, if all Apple did was give me an on/off switch for the magnetic timeline, about 95% of my main issues would disappear. It is my understanding that an option to turn off the magnetic timeline would be a rather tall technical task, and would also most likely cause usability problems due to some decisions already made. So let's pretend we can never get this, and come up with some ways to work within the confines of the magnetic timeline without actually turning it off.

Feature Wish #1 - Lock clips 
Give us a way to lock a clip in place. Simply right-click and select "LOCK". The clip could then change color and remain at that point in the timeline no matter what. The magnetic timeline could continue to work, with everything that is not locked sliding around it. Think of it as a rock in a river, sitting still, with water flowing on either side.

Feature Wish #2 - “Connected clip” to time ruler, instead of another clip 
In keeping with the "connected clip" style, give us the option to connect a clip to the time ruler, and not just to another clip. NLE editing is not always about relationships between video/audio clips; it is often based on a relationship between time and the clip. This is just another way to accomplish what locked clips would give, so I guess either one would do.

Feature Wish #3 - Allow to connect a clip to a connected clip 
Numerous issues could be solved by this. It's sort of a workaround to my magnetic timeline issue, but if Apple will give it to us, I'll take it.

Feature Wish #4 - Give us more storylines, and allow them to gain “focus” 
There is a reason that all NLEs have worked with tracks forever. A single storyline is extremely limiting, even with connected clips, and Apple made it even more limited than it has to be. A secondary storyline of connected clips is not an equal partner, in that you cannot edit into it using shortcuts the way you do on the primary storyline, so it becomes a mouse-dragging, multiple-step nightmare. We need a way to make a storyline other than the primary the main focus of our editing. This would be similar to the patch area of the legacy NLEs we have been using. Once your primary edit is done, you could switch focus to your secondary storyline and edit into it with all the same shortcuts and editing options in the program. At the same time, give us a few more storylines in a project.

Feature Wish #5 - Audio “Music” Bed 
Often when someone raises one of the biggest issues with FCPX, such as editing music videos, the feedback is "Just make your music the primary storyline". That's great, good thinking, but the reality is that doesn't really work. As stated in #4, the primary storyline is total king; once you give that up, you have caused issues of epic proportions. Much like iMovie before it, give us a separate "audio bed" that is essentially separate from everything going on in the edit. It could be an audio track that is immune to the magnetic timeline. It's a bit of an amateur idea, but it could work.

Feature Wish #6 - Separate the Source / Record Skimmer 
Overall I am not a real fan of the skimmer, but I do get why some people like it. I REALLY do not like it in the record area, but I tend to like it more when skimming the source area. It's a bit of a pain in the neck turning it on and off as I move the mouse around. Simply give us a way to enable/disable the skimmer in the source area and the timeline independently of each other.

Feature Wish #7 - Ability for the range selector to cross all clips in the timeline 
Right now the only way to take entire chunks out of your timeline is to group them all into a compound clip, set a range, delete, then open the compound clip again. The problem here is that those are multiple extra steps to accomplish a simple and common editing task. The even bigger issue is that there are currently numerous bugs surrounding compound clips, so there is no guarantee you can "ungroup" everything back into a proper timeline.

Feature Wish #8 - Ignore the skimmer when zooming the timeline 
(Skimmer on + touching the mouse + zooming the timeline) = disaster. By far one of the coolest parts of becoming an experienced editor is the precision with which you learn to navigate to different spots in the sequence. This is an impossibility in FCPX. Observe the playhead or the selected clip, and ignore the skimmer. Simple.

Feature wish #9 - Group Audio “roles” together 
I'll admit, roles went a long way toward solving a ton of issues with ver 1.0 (or 10.0) of FCPX. The problem is, as video editors, we think visually. I often have anywhere from 12-24 "tracks" of audio in an edit, and in FCPX that can look like the equivalent of a hoarder's living room. There needs to be a button that automatically brings all common audio roles together visually, if for nothing else than our sanity.

Feature Wish #10 - Save the in/out points of your source clip 
I know Apple keeps saying "Save it as a favorite". Stop it, just stop it. Save the damn in/out points. There is no reason not to. As an editor, you know this to be true. Thank you.

Feature Wish #11 - Allow Re-linking of Audio/Video 
It's really simple to separate the audio from the video, so why is it so hard to put it back? Let us select a clip of video, shift-select a clip of audio, and join them.

Feature Wish #12 - A true key-framing system 
The current way you go about keyframing in FCPX is a total disaster. I can edit on a 30" monitor, yet I have to keyframe in a little 50px area above the clip that requires about six clicks to even get to. Give us the ability to switch one of the interface areas to a keyframe timeline.

Feature Wish #13 - Customized layouts 
The main layout in FCPX is not the way to go. I want to change it, and I want the ability to do so.

Feature Wish #14 - Still Grab 
This should be a one-step process: click a "Grab Still" button, and the still appears in the source area, available for editing.

Feature Wish #15 - Ability to change start/center/end point of a transition 
A right-click in FCP7 at least gave you the "Start/Center/End" options. Avid is even better: you can actually drag the transition's location right in the interface for easily customizable transitions. For example, you can take 2 frames from the outgoing clip and 22 frames from the incoming clip on a transition. That is awesome. At least get us back to where FCP7 was.

Feature Wish #16 - Sync connections to separated audio 
Even though we may want to separate the audio and video, they should still maintain some type of relationship to each other and give us an indication of sync.

Feature Wish #17 - Cross Fades on Audio Clips 
For the love of God, why does something so remedial require so many damn mouse clicks?

Nothing stated above is unreasonable, nor should there be any technical hurdles. While many of these things may actually be on Apple's roadmap already, I get the sense some of them are not. Anything that disrupts the magnetic timeline is probably a philosophical issue for Apple, so they may avoid taking any action in that regard.

More than anything, give us options. That is the mark of a professional piece of software. Right now Apple is trying to walk the line between what they believe "professional" editors need and the lowest common denominator of the general consumer. They are afraid of making the program too difficult to use, but they have alienated the core base of its users. It's been discussed time and time again on the net already, so we will not open that box again. They seem to be making some strides… whether it's through public pressure, or because they have a true plan, has yet to be seen.

Probably the biggest gain I saw in the 10.0.3 update was not multicam, nor the "quasi-broadcast out" system they now have. Believe it or not, it's the decision they made to stop having the default behavior of adding transitions change the length of your sequence. That is the way any professional editor would have it, and it's great they figured that out. Let's hope that was just the first step, with many more to come.