Episode: 2310 Title: HPR2310: Kdenlive Part 6 Workflow and Conclusion. Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr2310/hpr2310.mp3 Transcribed: 2025-10-19 01:09:08 --- This is HPR episode 2,310 entitled "Kdenlive Part 6: Workflow and Conclusion". It is hosted by Geddes, is about 19 minutes long, and carries a clean flag. The summary is: a look at the final Kdenlive project workflow, and a conclusion. This episode of HPR is brought to you by AnHonestHost.com. Get a 15% discount on all shared hosting with the offer code HPR15, that's HPR15. Better web hosting that's honest and fair, at AnHonestHost.com.

Hello again, HPR listeners. This is Geddes with Part 6, the final article in this Kdenlive series, entitled Workflow and Conclusion. The topics included are: introduction, the gold master, the render menu and the gold master, encoding, workflow, post-production workflow, and finally the conclusion. So let's get going.

Introduction. Post-production is a long and involved process. As these articles have demonstrated, Kdenlive is capable of handling every step with efficiency and flexibility. In this article we will discuss the final export of the full project from Kdenlive, as well as examine the overall free software workflow of post-production. At this point in the Kdenlive project, all editing has been completed, the picture lock has been declared, colour correction has finished, compositing has been perfected, titles have been inserted, and the audio mix has been finalised. The only step left is to export the movie from Kdenlive as a self-contained movie file.

The first export of your movie should be a full-quality, bit-for-bit copy of exactly what you see in Kdenlive. This serves a few different purposes. 1. It ensures that you have a full-quality backup (a gold master) of your movie. 2. It ensures that everything you think you see in the smallish windows and relatively chaotic interface of your video editor is actually true. Regardless of how good an editor you are, there is just something different about sitting back and watching a movie without the ability to stop it and make a quick adjustment or a quick edit. This isn't something you should reserve for the very end of your project, either; it is a step you probably want to do periodically throughout the edit. They are called sanity checks in the industry.

The gold master. To get a gold master from Kdenlive, click the Render button in the main toolbar, or access it via the Project menu, Render. As your destination, choose Lossless/HQ. Name your output file as your gold master, or RC1 if it's merely a release candidate, or whatever notation you want to use.

The render menu and the gold master. Kdenlive uses FFmpeg as its render backend and offers FFV1 and Huffyuv as its full-quality master formats. Choose one of these (both are good; Huffyuv will be a larger file size, as it uses PCM for audio while the FFV1 preset uses FLAC) and click the Render to File button at the top left of the dialog box. I tend to use Huffyuv, as I have had good experiences with it. Now that you have a full-quality version of the movie, you can either use it as your transcoding source, if you wish to encode or distribute manually, or you can use Kdenlive's interface. I use FFmpeg directly, mostly out of habit, but the Kdenlive front end is to date the most sensible FFmpeg front end I've used.
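To get a feel for what those two lossless presets amount to when working with FFmpeg directly, here is a minimal sketch for transcoding an existing clip into each format. The file names are placeholders, and this only approximates the kind of command the presets correspond to, not Kdenlive's exact presets:

ffmpeg -i timeline_export.mov -c:v ffv1 -c:a flac gold_master.mkv            # FFV1 video plus FLAC audio in Matroska (hypothetical file names)
ffmpeg -i timeline_export.mov -c:v huffyuv -c:a pcm_s16le gold_master.avi    # Huffyuv video plus uncompressed PCM audio in AVI

Both results are lossless; as noted above, the Huffyuv/PCM file will be noticeably larger because the audio is left uncompressed.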
Encoding. Encoding video for distribution is subject to artistic preference and per-project requirements. There is not a single best way to encode your project; you must consider your desired format, your intended delivery method, and so on. Kdenlive takes into account most of the usual delivery methods, offering presets for DVDs, mobile devices, websites like YouTube and Vimeo, and much more. From the user's perspective, video codecs are mostly all created equal (a gross oversimplification that probably has FFmpeg and Kdenlive developers cringing): they take video, compress it somewhat, and can then be played back on the desired device. So it's useless for a content creator to debate over which codec will be best for their film. The fact is, any codec will do; the deciding factors will be subtleties like bitrate, keyframing and GOP size, and frame size.

The bitrate of a video determines how much information each frame contains. This affects not just the visual quality of the video, but also the ability of the video to be streamed over a network or even played by a device (since the graphics chip of any device will have its limitations). A very high bitrate, like 35,000 kilobits per second and higher, up to about 54,000 kilobits per second (or 54 megabits per second), is common on a medium like Blu-ray. DVDs have a bitrate of about 8,000 kilobits per second. Internet video, obviously, varies greatly, depending on how much confidence the content provider has in the network connection to their audience. Variable bitrates help lower the overall file size by throttling the bitrate during shots that don't actually require much information; a relatively still shot of a building, for example, is a lot less demanding on video playback than a high-speed car chase. The best way to optimise this is to use the two-pass option. This takes twice as long to encode, but the results are invariably better in both quality and file size. During the first pass, FFmpeg reviews the footage and plans out the optimal method for encoding; on the second pass, FFmpeg does the actual encoding.

If you're using the Kdenlive Render dialog box, the bitrate and the number of passes are really all you can control, aside from choosing a video format and frame size. The free video formats Xvid, MP4, Theora, and WebM are all excellent codecs, most of which are gaining widespread adoption. I would argue that nine times out of ten, these codecs alone are sufficient for most distribution channels. That said, there are those devices that require special options, and for these you really have no choice but to encode into a format that you don't really own or control. Frame size has a very direct effect on file size, so if you're targeting a specific file size, then bitrate and frame size are the attributes you can adjust to help you achieve your goal. The lower the bitrate, the smaller your file size will be, and if you reduce your frame size by 50%, you'll often see nearly a 50% drop in file size. Of course, the trade-off in both cases is quality.

If you want to try some custom FFmpeg commands to run against your uncompressed gold master, then you can choose all the options you want and then, rather than clicking the Render to File button, click Generate Script instead. This dumps the FFmpeg command that Kdenlive has generated to a file on your hard drive. You can customise this script using your favourite text editor, and then run it from the Kdenlive Render dialog box via the Scripts tab.
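As a concrete illustration of the two-pass idea outside of Kdenlive, here is a minimal FFmpeg sketch for a WebM (VP8 video, Vorbis audio) encode. The file names and the 2,000 kilobit-per-second target are assumptions for the example, not values from the article:

ffmpeg -y -i gold_master.mkv -c:v libvpx -b:v 2000k -pass 1 -an -f webm /dev/null     # first pass: analyse only, discard the video output
ffmpeg -i gold_master.mkv -c:v libvpx -b:v 2000k -pass 2 -c:a libvorbis movie_web.webm # second pass: the actual encode, using the first-pass log

The first pass writes a statistics log rather than a usable file (hence the output to /dev/null), and the second pass does the real encoding using that analysis, which is exactly the behaviour described above.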
Workflow. The bigger picture of post-production warrants some consideration, since Kdenlive is only one piece of the puzzle in a diverse industry that is filled with developing technologies. These six articles have shown that Kdenlive is poised to easily be a drop-in solution for video editing, being able to ingest a variety of formats, combine all manner of visual effects, and be the final mix of all the different elements that go into a video production. The bigger question, then, is whether GNU/Linux is ready to be a multimedia production solution.

Post-production workflow. The general flow of post-production revolves around a video editor, but requires a number of additional, specialised applications. Before the media even gets ingested by the video editing application, it must be organised and sorted in some useful way. Proprietary editing applications mostly encourage editors to do this within the application. This is often very helpful, as long as you're only accessing the media from within that application, which is rarely ever the case, even when the proprietary application claims to be a turnkey solution. Proper media management should be done in a file manager, such as Dolphin or Nautilus, or the terminal of your choice. Here are some good principles that any good assistant editor knows.

1. Stay organised. Don't scatter your footage all over your computer and expect your project to retain its integrity for years to come. If your project is a quick one-off video that you're going to post to the internet, or render out to a hard drive and then delete the source files from your computer, then you might not need to bother with proper organisation. For a serious video project that is going to take more than an afternoon to edit, however, keep your files in one directory tree (a small sketch of what this can look like follows after this list), keep them organised on your hard drive, and back them up.

2. Name your files so that they are useful. MVI008087.AVI and DSC_00101.MOV are not appropriate file names for video clips upon which your project relies. In a perfect world, we'd all be assigned an assistant editor to watch all of our footage, carefully summarise what's in the take and what the take number is, and then name the clip accordingly. Until this happens, the responsibility falls upon you, so watch your clips and give them logical names, such as klaatu_drinks_coffee_21_MCU_2.MTS, where "klaatu drinks coffee" is the scripted action, 21 is the scene number, MCU is the type of shot (in this case, medium close-up), and 2 is the take number. Stay with that convention and you'll never fail to know exactly what is in each shot and where in your project it should be.

3. Don't edit off a USB 2 drive. There are exceptions to this rule, but USB 2 really is too slow for serious high-def editing, and even, I find, for standard-def editing once the project becomes very complex. Get an extra SATA controller if you have to, or an eSATA external drive, or upgrade to USB 3, but try to avoid USB 2 for editing if possible.

Two exciting projects have been developing lately, both geared towards media management on a larger scale: dmedia, associated with the up-and-coming Novacut video editor, promises distributed workflows, and MIS from NIDA Media provides editors with comprehensive details about shared media in a central media library.
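Purely as an illustration of the layout the first two principles imply, and not anything the article prescribes, a project tree might be set up from the shell like this (the project name, directory names, and file names are all hypothetical):

mkdir -p my_film/{footage,audio,titles,renders,project}    # one directory tree for the whole project
mkdir -p my_film/footage/scene_21                           # footage sorted by scene
mv /media/camera/00087.MTS my_film/footage/scene_21/klaatu_drinks_coffee_21_MCU_2.MTS   # hypothetical descriptive rename

Whatever the exact layout, the point is that everything the project depends on lives under one tree that can be backed up and moved as a unit.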
Once the media has been properly organised, it's safe to import it into your video editing application. Kdenlive allows you to create folders within the project tree; use these to manage scenes. Also remember to make copies of each major cut of your work: versioning is important.

Transcoding, if it needs to be done, is an area in which GNU/Linux excels. Between FFmpeg and MEncoder, you will usually have no trouble getting video into a format that is easily edited. In fact, at the production facility where I work, it's a Linux box, built at a fraction of the price of the computers around it and yet three times as powerful, that is the main conversion station when video comes in that nothing else will edit (eventually, of course, it will be Linux that is used to edit everything in the first place).

After the project has been created in your editing application and the media is ready to be cut, the next step is obviously editing the film. As this article has shown, Kdenlive handles this with ease. A few things to keep in mind in any editing application, Kdenlive not excepted:

1. Whatever is in the timeline is in your RAM. In Emacs terms, the timeline is your buffer. If you are editing standard-definition footage and have 4 GB of RAM, then you will surely be able to edit about half an hour of footage with numerous cuts and clips without noticing any burden on your system. High-def footage on 4 GB of RAM is quite another story. Keep this in mind. If you must, edit your project in two- or three-scene chunks as convenient, and then marry it all together in the end.

2. I've edited on both my laptop and my main workstation. The laptop is technically able to edit, which is convenient when not in the studio, but it's far more pleasant to edit on the workstation. Marketing ads showing professional editors cutting their film on a small laptop are doing just that: marketing. If you are about to embark on a serious video project, buy or build a computer appropriate to the job, with multiple CPU cores, plenty of RAM, and a healthy video card.

3. Unfortunately, in my tests the free video card drivers available for ATI and Nvidia have not yet proven capable of the same performance as the proprietary drivers. Intel cards are nice in this way, being both open source and, depending on the chip, capable. But if you are doing serious compositing, you will most likely require Nvidia or ATI. Hopefully the free drivers will develop quickly so that they can be used for heavy lifting, or Nvidia and ATI will come to their senses and open-source their drivers. (This situation has improved since the article was written.)

Once the picture is locked, or nearly locked, effects, composites, and titles can be worked on. While Kdenlive can do basic versions of all of these, there are better tools in the GNU/Linux world for the job. In fact, we have two excellent tools that, again, equal or rival the tools available in the proprietary world: Blender and Synfig Studio. In the FX world, it is typical to work with image sequences rather than video. To export just a scene from your movie to deliver to the compositing artist, you can utilise in and out points in the timeline just as you would in the clip monitor. It is common to deliver the scene to the FX artist with handles, that is, a second or so of the shot leading into the FX shot and a few seconds of the shot following. In the render dialog box, choose Lossless/HQ as your destination, deselect the export audio option, and save the clip to some location. Then, in the terminal, convert the shot to a series of images with FFmpeg.
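As a rough sketch of the idea (the file names here are hypothetical, and this is not the exact command from the article), the conversion can look like this:

ffmpeg -i scene_21_vfx.mkv scene_21_%05d.png    # writes one zero-padded, numbered PNG per frame

The %05d pattern produces scene_21_00001.png, scene_21_00002.png, and so on, which is the numbered-sequence form that compositing tools such as Blender expect when importing an image sequence.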
Please see the article for the sample terminal command and its explanation. Titling can be done a few different ways. For animated title sequences, Blender or Synfig will probably be the best choice. If you are simply going to use static title cards, then you can also design your titles in GIMP and import a single PNG or TIFF image into your project. You might have to adjust for pixel aspect ratio (i.e. design your titles in GIMP at 720 x 534 or so, such that when they are imported into a 720 x 480 project, they appear properly proportioned).

In the previous article, preparing the audio mix was discussed in detail. Free software offers a number of excellent options for achieving a professional sound mix, whether you know just enough to use Audacity or prefer a full-featured digital audio workstation such as Qtractor. With over 100 plugins available from the LADSPA and Calf projects, and a few others, you'll have everything you need, and quite possibly a lot more than you'd normally have if you had to pay for all of those features separately. When the audio and composites are ready, they are re-imported into Kdenlive and integrated with the project. More than likely, a few last-minute revisions will be made by a picky director or the all-knowing producer (they are always all-knowing), but more or less the project is finished. Export it as a lossless gold master and compress it for your targeted distribution.

Conclusion. GNU/Linux has refined its multimedia capabilities to be user friendly, flexible, efficient, and stable. It is a realistic platform for post-production and content delivery. It is also flexible enough to integrate into an existing non-Linux environment, with many tools, such as Blender, Audacity, and FFmpeg, being completely cross-platform. Start converting your post-production process today and discover true independent filmmaking.

And that's the end of Kdenlive Part 6, the final part in the series. As usual, your feedback and comments are welcome. This has been Geddes for Hacker Public Radio, and I will speak to you again when I will be narrating a new series of information of interest to hackers, particularly focused on those in the community who develop web apps. Speak to you soon.

You've been listening to Hacker Public Radio at HackerPublicRadio.org. We are a community podcast network that releases shows every weekday, Monday through Friday. Today's show, like all our shows, was contributed by an HPR listener like yourself. If you ever thought of recording a podcast, then click on our contribute link to find out how easy it really is. Hacker Public Radio was founded by the Digital Dog Pound and the Infonomicon Computer Club, and is part of the binary revolution at binrev.com. If you have comments on today's show, please email the host directly, leave a comment on the website, or record a follow-up episode yourself. Unless otherwise stated, today's show is released under the Creative Commons Attribution-ShareAlike 3.0 license.