Episode: 803
Title: HPR0803: A novacut support call
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr0803/hpr0803.mp3
Transcribed: 2025-10-08 02:45:31
---
[music]
So Novacut, this really interesting new, hopefully great video editor project.
So, Jason DeRose, one of my personal requirements, or one of my personal hopes, for Novacut is to have really great color correction.
Now you talk.
Okay, I don't know if there was more to the question.
So I guess one thing that I can start with is that color correction is definitely very high on our list.
Probably first is the organization and a great cutting workflow.
Then some very basic audio tools, and then color correction is probably the next thing we'll turn to.
We're at a pretty early research stage on that.
So I think this is where, if you have experience with certain color correction workflows, we'd love feedback on things you like, don't like, and so on.
So as far as color correction goes, is the emphasis more on just trying to get these shots equalized?
Or is there going to be a pretty heavy emphasis on actually doing artistic color grades?
Definitely the artistic color grades, although the equalization part is something that we have some interesting automation opportunities for.
So for those of you listening that don't know, cinema lenses tend to be color matched, so that, all other things being equal, when you go from your 35 millimeter to your 15 millimeter lens, the colors should look the same.
But with photographic lenses, they don't tend to do that.
And, you know, there are different properties they're optimizing for, so the colors are a little different.
But because we have all the EXIF metadata, we normally do know the lens that was used.
So we have the information; the goal is to be able to do some automatic color normalization, so the color looks the same between different lenses.
So kind of the idea of collecting and doing profiling on the lenses.
So is the plan to do heuristics locally and learn what users do with shots that come from the same kind of lens?
Yeah, so it would take building calibration data; you know, basically you'd have to set up a controlled shot that you can do over and over again, and get test data with different lenses, and different white balances also, and then use that data to do the correction.
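A minimal sketch of what that kind of per-lens normalization could look like, assuming you average a controlled calibration frame per lens and derive RGB gains relative to a reference lens; real lens profiling would be more involved, and all of the names here are illustrative, not anything Novacut actually does:

    import numpy as np

    def mean_rgb(frame):
        """frame: HxWx3 float array of the controlled calibration shot."""
        return frame.reshape(-1, 3).mean(axis=0)

    def lens_gains(calib_frame, reference_frame):
        """Per-channel gains that pull this lens toward the reference lens."""
        return mean_rgb(reference_frame) / mean_rgb(calib_frame)

    def normalize(frame, gains):
        """Apply the measured gains to a shot taken with that lens."""
        return np.clip(frame * gains, 0.0, 1.0)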
So you're not going to, like, look in real time at how an end user actually uses the software and just kind of watch what they do and go, hmm, they consistently do this with this lens, and they do that with that lens?
Um, I guess we could, but I mean, it would be hard for that to be that useful, because you don't know if the user is just kind of normalizing or if, like you say, they're doing the artistic color correction.
Well, honestly, I would think it wouldn't be that hard to heuristically figure it out, because if you're doing the artistic color grade, that's kind of, I've got a shot laid out, or I've got an entire scene laid out and I want to grade the entire scene as a whole, whereas if I'm doing color correction, I'm doing it shot by shot.
So if I'm doing color correction, since you're working on the shots in the slices, you should probably do it in the slice, shot kind of mode, where you're in the slice figuring your cuts out and then correcting it, or some way you can grab and correct an individual shot.
Whereas for the artistic grades you would want to grade the entire sequence of images, yeah, the entire video sequence, and then do your artistic color grades.
Right.
So it should, in theory, be trivial to figure out what he's doing.
And then you're going to have EXIF data, or you should have EXIF data, on every shot.
Right.
So you would be able to associate and see if there was a pattern of standard stuff that he was doing with the same lens.
But the other thing you got to think about is you're not always necessarily in the same shooting conditions.
So even if you've got two lenses that are essentially the same, but you're doing shots on a different day in a different lighting condition.
One day it was overcast one day it was bright sun.
You're going to have different white balance and whatnot to mess with.
Well, and that's why having the calibrated data from, like, a controlled test setup is important, because otherwise, you know, like you said, the situations are different.
But the color normalizing you need to do based on the physical properties of the lens, I mean, that is something you can measure and then apply automatically.
So would this be something where end users could take the normal photography white balance cards and actually use them to give you a baseline, so you can build up a profile for the lens?
Yeah, pretty much, although I'm not sure the gray cards alone are enough, because part of it is that different colors have different transmission efficiencies on different lenses.
So it takes pretty complicated test data, I think, to do this. And again, you know, not an area I'm especially knowledgeable in, but I think it's a place where we could at least speed up the workflow a bit, because if you can have things close to color normalized, then as you're doing the artistic color grading, the same adjustment should produce close to the same effect across different shots that are from different lenses.
And then, of course, the artist may make some tweaks afterward, but the idea is, um, reducing some of the manual work there.
Well, this brings up an interesting project I happened to see that might come into effect.
There's a tracker slash learning algorithm.
I'm just wondering, if you actually grabbed an object in the scene and put that tracker on it, since it can actually determine the object in multiple scenes where the shot is slightly different, if it would be possible to go, hey, track this person.
And then you have the exact same person in another shot, and you could take just that person's image and try to match those, because if they were wearing the same outfit, same costume, they should match.
And be able to use that for normalization, to try to massage those two together.
Right. Um, yeah, I need to actually learn more about the terminology, but I think, you know, if you're making a color correction across the entire frame, they call that, like, stage one or layer one or something.
And then when you have a situation where you basically have, like, a rotoscoped area of the frame that you're applying different color correction to, like a person's face or their shirt or whatever, they call it layer two or stage two or something, but, um, yeah, definitely another great kind of automation thing.
Some of the current pro tools do that where it's like, you know, okay, I made the shirt with these color properties in this shot. Now match it in this next shot automatically.
Yeah, DaVinci Resolve actually.
I don't know if it matches based on that, but I know it does a lot.
I've never seen anybody demo it matching; all I've ever seen is them demoing what they can do on a per-shot basis, where they try to make the ideal color grade automatically.
But I've never actually seen a matching workflow; heck, I don't think I've ever seen a demo of a matching workflow.
Speaking of that, if anybody knows of one, please link it. Was it the Novacut artist account on Vimeo?
Novacut's diaries.
Yes, if anybody could link or point us towards some great color grading tutorials, so we could see what other projects and other systems are actually doing and how they work, that would be awesome.
Now speaking of rotoscoping.
Can we please have a zoom?
You just mean like a crop pretty much right?
No, no, no, I was actually attempting to rotoscope Katie in real life.
And at the beginning of the shot, where I was going to start doing the rotoscope, the object I was rotoscoping was really, really small.
So it made it extremely difficult to start.
So just being able to zoom into the object to be able to start your rotoscope and then just keep doing the keyframes from that.
Oh, nice. Yeah.
That's a great feature idea.
It should be, it would be nice to be able to zoom in on any element so you can inspect it, especially if you're working with very high res footage, or if you're using a tablet interface where the resolution is not that great.
Like an on-the-set, let's-check-things kind of deal.
Not to mention, a dot-to-dot zoom window would be really nice.
What do you mean by dot to dot zoom?
So say you're dealing with 4K or 5K material.
Just being able to basically zoom in to a level where it's 1:1 pixel.
So I mean, you may be looking at a tiny portion of the image, but at least you can do focus checking and make sure the footage actually is usable.
Yeah, definitely.
So there would probably be a shortcut key or some quick way to get into that.
And, oh, CinemaDNG.
Any plans for this? Any plans to approach it?
Any plans to deal with raw video at all?
Plans, definitely; it's just a matter of, I mean, you know, we're starting with Canon HDSLRs in terms of what we'll do our full quality control on.
And then we'll just kind of pick whatever seems to be the second most commonly used camera for our target users, and then just work our way down the list.
But, you know, right now there's not that many cameras that shoot CinemaDNG, so that may be lower priority than trying to support the RED raw format.
Now, CinemaDNG, because of its openness, could be a really awesome format to do archival stuff in. It is a completely open specification; whatever Adobe's role in it, it's actually open, and since it's actually storing each image in a DNG file, any of your raw photography tools can work with it, and nearly all of those have batch modes, so you can edit en masse.
So this could give some more options to people actually wanting to do additional work outside of Novacut. Speaking about working with Novacut in a workflow with other tools, what's the plan on that front?
So the short answer is yes, we will do that. From some feedback recently, it seems like XML support is definitely a very high priority, which I think will be relatively easy to do, especially because, you know, the way we describe things on our end is just really simple, so it should be fairly easy to map to XML.
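A minimal sketch of why a simple internal edit description maps to XML easily, assuming the edit is a plain dict of slices; the slice fields here are hypothetical, and this is not a real interchange format like FCP XML or an EDL:

    import xml.etree.ElementTree as ET

    def edit_to_xml(edit):
        root = ET.Element('sequence', name=edit.get('name', 'untitled'))
        for s in edit['slices']:  # hypothetical schema: src, start, stop in frames
            ET.SubElement(root, 'clip', src=s['src'],
                          start=str(s['start']), stop=str(s['stop']))
        return ET.tostring(root, encoding='unicode')

    print(edit_to_xml({
        'name': 'demo',
        'slices': [{'src': 'MVI_0001.MOV', 'start': 120, 'stop': 480}],
    }))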
And then as far as image formats or video formats, it seems like, what is it, ProRes? ProRes and DPX are the formats that, like, a Hollywood workflow needs.
And, you know, a lot of the amateur, or not amateur, but low budget indie stuff is similar.
You're thinking of prosumer.
More just lower budget, you know, like indie web series kind of stuff.
Yeah, I think, in my opinion, the term that tends to follow is prosumer: they tend to use cameras that are a little bit more than your average point-and-shoot kind of cameras, but not necessarily full-fledged movie cameras. They tend to fall into that prosumer category, where they have some money to spend, but it's usually not a...
They have buckets of money instead of truckloads of money.
I mean, shit, Leo Laporte, I think falls in this category.
Even though his buckets have got quite large.
There's a couple more things I was thinking about.
As far as working with other software, are there any plans or hopes that your metadata, or the way you organize your clips, could be exposed at, like, a file system level?
Like, this is what we talked about before, in terms of using symlinks or whatever to put them in a relatively human readable layout, like the way, say, Shotwell does by date or something like that.
Something so I can take, say I have graphics, or I need to import this stuff into Nuke, and I want it all in one folder, so I can just grab that folder and grab everything in it.
It's kind of the workflow that I do with photography, where I just organize everything into one album, and digiKam represents the albums as actual folders on the hard drive.
So I just open up Hugin or some other program I need to get at the images with; I can just grab everything in the folder and it's all right there.
And it's still in the digiKam archive, so if I have to go back into digiKam, I can go back into it and look at it.
Right. Yeah, that would be something easy to do.
And I think that kind of thing, like exposing all the files for a specific project, is a really good use case.
And then you can interoperate with programs that don't even need to care about what dmedia is doing in the backend or care about any of that stuff.
They just see all the files in this folder and they go, okay, and you just open the folder.
I mean, this could even have a use case for dailies.
Hey, I've got everything in this folder that I want to show my guy for dailies, and I just grab the folder and throw it at, or open, VLC with that folder and just play everything in it.
Right.
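A minimal sketch of exposing a project as a plain folder of symlinks, in the spirit of the digiKam-album layout described above, so a tool like Nuke or VLC can just open the folder; the store layout and project schema are assumptions, not dmedia's actual on-disk format:

    import os

    def export_project_folder(project, store_dir, export_dir):
        os.makedirs(export_dir, exist_ok=True)
        for clip in project['clips']:  # hypothetical schema: file_id, name
            target = os.path.join(store_dir, clip['file_id'])  # content-addressed file
            link = os.path.join(export_dir, clip['name'])      # human-readable name
            if not os.path.lexists(link):
                os.symlink(target, link)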
Yeah. So fairly complete audio tools is definitely what's on the radar first.
You know, workflow automation, like automatically syncing, you know, a Zoom recorder with your HDSLR, the way, like, PluralEyes does, is very high priority also.
So it's something that, when we get to it, we'll, you know, for one, look at what other options or what other pieces are out there that we can use; like, integrating with Blender seems like a potential for a big win in terms of saving us a lot of work, and also making an existing tool, Blender, stronger in terms of putting more options into its kind of workflow ecosystem.
Now, Blender is definitely a great tool for everything from creating really interesting 2D graphics, where you're actually making 3D models and turning them into interesting 2D graphics, to making no-shit 2D graphics, to creating nice 3D animations.
So speaking of Blender, what other tools do you think would actually fit in the toolbox of someone working with Novacut?
I mean, there's not really any limitation; Blender seems like the strongest contender right now.
You know, I think for some people, say, I know a lot of people will use, like, Photoshop in some of their workflow steps, and for some people GIMP might be a valuable replacement, but for some people, I think it's definitely not right now.
So I think it just depends a lot on the individual artist's needs, but I'd love to see a lot of creative tools integrated with dmedia, and then long term we want to try to split out some of the core editing bits also.
I might call it d-edit or something like that, but, you know, we really want these components to be platform building components. As far as splitting out the editing part, at this point it's just not that clear to me where the division needs to be, like, you know, where you make the cutoff for what's useful to lots of applications and what is really so closely tied to Novacut that it's not abstracted enough.
So we'll let that kind of slow roast for a while; but with dmedia, at the very beginning it was clear where that line was.
So, are there any plans to, like, integrate with GIMP, or maybe bring some of the GIMP libraries in? What would be really interesting is to take a pan shot, and then use Hugin to make a really cool panorama out of it.
Yeah, so I believe there's been some preliminary work on integrating GEGL.
That's the, what was that going to even stand for?
It's basically the GIMP image processing library split off into a standalone library.
So there's been some work on integrating that into GStreamer, which is great because, you know, we really want to have standard kind of photographic processing available.
And the other great thing is that GEGL uses 32-bit, you know, linear light for its entire pipeline.
So in terms of, like, doing the final high quality renders or color correction or anything, we need a lot of headroom to work with in the colors, because, you know, you're going through a bunch of operations and you want to have something left at the end.
So that's a great option, but it's something where we wouldn't really be doing anything specific to Novacut; it would just be, you know, making that work better in GStreamer.
So it would theoretically, or probably, be possible to take the exact same curve save files that you use for GIMP, or use in digiKam, or some other FLOSS tools, and actually bring them into Novacut for part of your color grading steps?
I mean, it depends on whether an application has that functionality as a library that's reusable, or, you know, how they do that.
And I'm a big fan, you know, applications should put all that stuff in a library; don't have it internal. And I guess that's one thing I'll say on Novacut too:
we basically want the Novacut render backend to be nothing, you know, to be a small amount of code that goes from the way we describe the edit in CouchDB and maps it into the correct GES, the GStreamer Editing Services, kind of construction.
And as far as, you know, the image processing, audio processing, we want to do all that upstream in GStreamer.
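A minimal sketch of what a small render backend that maps an edit description onto GES might look like, using the GStreamer Editing Services Python bindings; the slice schema here is an assumption for illustration, not Novacut's actual CouchDB schema:

    import gi
    gi.require_version('Gst', '1.0')
    gi.require_version('GES', '1.0')
    from gi.repository import Gst, GES

    Gst.init(None)
    GES.init()

    def build_timeline(edit):
        timeline = GES.Timeline.new_audio_video()
        layer = timeline.append_layer()
        position = 0
        for s in edit['slices']:  # hypothetical schema: path, start/stop in nanoseconds
            clip = GES.UriClip.new('file://' + s['path'])
            layer.add_clip(clip)
            clip.set_start(position)                    # where it lands in the timeline
            clip.set_inpoint(s['start'])                # where it starts in the source
            clip.set_duration(s['stop'] - s['start'])
            position += clip.get_duration()
        return timeline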
Sweet. Are there any plans to have nice transparency, like we already see in OpenShot, so we can at least apply titles and have transparent areas, so we can put a nice graphic title created in GIMP or Photoshop on top of a video?
Yeah, I think that needs to be a high priority, in terms of, we're going to be careful to avoid, you know, some features that are really just for home users.
I mean, some of the mistakes that Apple made with FCPX, you know, and the more I talk to pro editors, the more I realize the way in which, you know, adding all the iMovie transitions into the pro editor was kind of insulting, just because, you know, people don't actually use that. But definitely titles are important.
We just need to figure out what's kind of the best initial step for doing that.
Sweet.
Speaking of transitions, well, even if they're not in there by default, will there be ways to bring in transitions, in case us little home users want to use them just for the heck of it?
Maybe it'd have to be a plugin.
I'm not sure right now, just because, you know, we're lucky to have a special effects artist hanging out in the channel who's been around for a while, who worked on Harry Potter and such.
There's something else he has the IMDb credit for, I can't recall, but anyway, you know, he specifically asked us to not include silly transitions, or however he phrased it exactly.
But he said that, in terms of what he does, it's like, they want a film cross dissolve as the only transition.
And in terms of the way they do transition-like stuff when they do want it, it's, you know, pretty much compositing; the pre-canned wipes and stuff like that just aren't useful enough, because they don't want to use some stock thing that's been used a million times. They're going to, you know, do their own motion graphics and whatever to make it how they want it.
All right, so it would be more of a tool to create original transitions, and maybe make some canned stuff out of that tool that could be imported.
And then you can give people a tool, that isn't necessarily going to cost a huge amount of money, that they could use to create their own transitions that wouldn't look pre-canned, because they wouldn't be pre-canned.
Right. And I guess the thing I'll say too is, you know, long term we definitely want the Novacut render backend to be useful to stuff besides Novacut.
I mean, I would personally love to see it be used for photography also, because I think, you know, it's a thing that's kind of silly for an application to have internal, and I'd love to see, you know, just a general GStreamer plus GEGL processing pipeline you'd be able to use, so you could describe edits on photos also in the Novacut edit description and send that to the Novacut render server and have it spit out, you know, the result.
But anyway, to go back to the point I think I had there:
I think when we get to the point where we have a little time to think about this and a little room to breathe, if, for example, there isn't a great home-use-targeted video editor out there, it's like, you know, why not build one on the Novacut backend?
But I think there's definitely a line where you can't fit everyone's needs as far as workflow and some key design issues.
So I think it gets to the point where it's better to have a user experience built for casual users and a user experience built for producers.
So, potentially, the goal is to have it so separate that the UI can be pulled off and another UI can be put on, or even pieces of the UI can be changed out, so it would work better for different use cases.
Right. Yeah, and actually that's not even a theory, that's strongly enforced reality, because of the way we use CouchDB as an intermediary.
So there's no way to, like, tie the UI into the render backend, because we don't let them talk to each other at all.
And I think that's always people's intent when they make something like a video editor, but in practice, if you can cross the streams, you tend to, you know, because there'll be some little problem where you go, well, this is easier to solve if I have the UI reach in and, you know, get dirty with some details of the GStreamer pipeline or whatever.
But yeah, so the idea is that, you know, you could swap in a different render backend that was built on say like a different multimedia processing library.
And the UI wouldn't know the difference or you could slap a different UI on it and the render backend wouldn't know the difference.
Are there any plans to be able to take the output render and actually componentize it so it can be run on a render farm?
Definitely. So there are actually some interesting, somewhat recent developments in GStreamer that I just came across: there's a new type of queue element.
So the original queue is used to create, like, a thread boundary, pretty much; one side of the queue runs in one thread and the other side in another thread, so they can be running on, you know, different cores simultaneously.
But queue2 can write to disk. And from what I understand, it's kind of a foundation that would allow you to split a render across multiple servers, but actually still have a, you know, coherent GStreamer pipeline.
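A minimal sketch of the queue versus queue2 distinction described here, assuming the GStreamer 1.x Python bindings; queue gives you a thread boundary, while queue2 with a temp-template can spill its buffered data through a file on disk:

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)
    # Everything upstream of queue2 runs in one streaming thread, everything
    # downstream in another; temp-template lets queue2 buffer through a file.
    pipeline = Gst.parse_launch(
        'videotestsrc num-buffers=300 ! '
        'queue2 temp-template=/tmp/render-buffer-XXXXXX ! '
        'x264enc ! matroskamux ! filesink location=/tmp/preview.mkv'
    )
    pipeline.set_state(Gst.State.PLAYING)
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.EOS | Gst.MessageType.ERROR)
    pipeline.set_state(Gst.State.NULL)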
And then the other thing, too, is the way we describe the edits is very much designed to allow us to pre-render any step.
So the input into a certain point in the editing graph, you know, could be its own pipeline that gets built, or it could be something pre-rendered that, you know, you're doing for, like, effects that are too complicated to do in real time or whatever.
So are we talking about running a live editor on a cluster, or running post once the edit is somewhat near finalized, actually running it on a render farm to get a final product out, or somewhere in the middle there?
Well, definitely doing the final render on the cluster; doing, like, the preview rendering on a cluster, you know, it gets a little harder to parallelize things depending on what you're doing. But, you know, in theory, at least in terms of how the edit description works, it's designed to do both. In practical reality, getting real time playback from a cluster won't be a feature we have early on.
But, you know, the possibilities are there for down the road.
So if you do something that's fairly complicated to a sequence, you could just farm that out to a render farm or farm that out somewhere else in the cluster.
And then as soon as it gets done, get it integrated back into the live edit.
Right. And maybe even do that kind of transparently to the user. So, oh, look, it's done, and it looks right.
Yeah, exactly. So things are definitely designed for continuous background rendering. So, like you said, you may apply something that we can't really do a decent real time preview of, or can maybe only do a downgraded preview of.
But then, in the background, the render server would say, okay, this node in the graph changed, and now I'm going to pre-render it.
So when you play through it next time, it's not doing it on the fly; it's just playing, you know, video that was written to disk, pretty much.
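A minimal sketch of that background pre-rendering idea: key the pre-rendered file for a node in the edit graph off a hash of that node's description, so unchanged nodes keep hitting the cache and playback can use the file from disk; the names here (render_node, the scratch directory) are hypothetical, not Novacut's API:

    import hashlib, json, os

    PRERENDER_DIR = '/var/tmp/novacut-prerender'  # hypothetical scratch location

    def node_key(node_doc):
        """Stable hash of a node's edit description (a CouchDB-style dict)."""
        return hashlib.sha1(json.dumps(node_doc, sort_keys=True).encode()).hexdigest()

    def ensure_prerendered(node_doc, render_node):
        """Render the node in the background only if its description changed."""
        path = os.path.join(PRERENDER_DIR, node_key(node_doc) + '.mkv')
        if not os.path.exists(path):          # unchanged nodes hit the cache
            os.makedirs(PRERENDER_DIR, exist_ok=True)
            render_node(node_doc, path)       # hypothetical renderer callback
        return path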
So this is leading us into the exploding data problem.
What's the plan on managing real time render files and all that fun stuff?
The plan is that you won't have to manage them at all; that's all being done with dmedia.
We definitely want to have the option of, like, a dedicated scratch disk, but that's both for I/O performance and also so that the dmedia stores your source files are on don't get, you know, unnecessarily fragmented, because the background rendering is a lot of small little files kind of all the time.
But then all these files are just under, you know, dmedia's purview, so we can do smart things like it does with other stuff: okay, here's a bunch of preview renders that you haven't accessed in two years, so these are fair game for reclaiming space, so there's space for the current preview renders that you're creating.
So essentially you're going to do a smart garbage collection of the old preview renders and whatnot.
Exactly.
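A minimal sketch of that garbage collection, reclaiming space from preview renders that have not been accessed in a long time; the directory and the two-year threshold follow the example above, but the policy is otherwise an assumption, not dmedia's actual behaviour:

    import os, time

    PRERENDER_DIR = '/var/tmp/novacut-prerender'
    MAX_AGE = 2 * 365 * 24 * 3600  # "haven't accessed in two years"

    def collect_stale_renders(directory=PRERENDER_DIR, max_age=MAX_AGE):
        now = time.time()
        freed = 0
        for name in os.listdir(directory):
            path = os.path.join(directory, name)
            st = os.stat(path)
            if now - st.st_atime > max_age:   # atime = time of last access
                freed += st.st_size
                os.remove(path)
        return freed                          # bytes reclaimed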
So would it be feasible, say, I have a really nice desktop computer. I mean, I don't have the two or three grand lying around to throw at a really big, fast PCIe SSD card, but I do have, like, 16 gigs of RAM, and of course I only really need four gigs of RAM to run the system.
So I toss 12 gigs into a RAM drive and use it as scratch. Would that be a feasible idea?
Yeah, definitely.
It takes some testing to figure out whether that is the best use of that RAM.
But, you know, obviously, even if you are writing it to disk, it's going to be in RAM initially until it gets flushed out.
But, I mean, we could do something where we initially write it to, like, a tmpfs in memory, and then after a certain amount of time, when we know you actually are going to continue to use it, then we write it to the hard disk.
So, like, say you make a change; just in case you immediately change it again, and we don't need that cached render, you could first write it to memory and then move it over.
All right, huh?
That could be a poor man's really, really fast scratch drive.
Yeah, exactly.
I've looked at the price tags on some of these fast SSD cards, and RAM, even slow RAM, the speed makes a lot of these SSD cards look... or rather, next to the SSD cards, RAM looks very cheap, and it has very good performance numbers.
The problem is it's not non-volatile.
Right, but for the background rendering, I mean, it doesn't matter if it's volatile or not.
Yeah, you might have to have some kind of, hey, we're using it like this, let's have some kind of auto save function where we can try to push it out to a slower disk so that there can be some kind of backup,
so you don't completely lose track of where you're at.
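A minimal sketch of that tmpfs-first scratch idea: write fresh background renders to a RAM-backed tmpfs, then move the ones that have survived long enough out to a slower persistent disk as the backup; the mount points and the settle time are assumptions for illustration:

    import os, shutil, time

    TMPFS_DIR = '/dev/shm/novacut-scratch'    # RAM-backed tmpfs
    DISK_DIR = '/var/tmp/novacut-prerender'   # slower, persistent scratch
    SETTLE_SECONDS = 300                      # keep in RAM for five minutes first

    def flush_settled_renders():
        """Move renders that have not been rewritten recently out to disk."""
        now = time.time()
        os.makedirs(DISK_DIR, exist_ok=True)
        for name in os.listdir(TMPFS_DIR):
            src = os.path.join(TMPFS_DIR, name)
            if now - os.stat(src).st_mtime > SETTLE_SECONDS:
                shutil.move(src, os.path.join(DISK_DIR, name))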
Make sure that the CouchDB doesn't live in the RAM.
Oh, with changes to the edit, that's totally different; that's all very much written to disk continuously.
Yeah, you would definitely want Novacut to be aware of the type of memory it was using. Speaking of that: external hard drives and dmedia.
Yeah, so there's a bit more work that needs to be done on this front, which probably is going to be done this month, but early on in the design work for dmedia, external hard drives were a very important use case for us, because, you know, if, say, you have a large existing project and you want to hand someone two terabytes of files, being able to physically do that on a hard drive is really important.
So the idea of a file store, or what dmedia calls a file store, being on an external hard drive is a very first-class feature, including, you know, dmedia will basically dump the corresponding CouchDB documents onto the external hard drive also, for the files that are on that drive.
So, you have something that's totally self-contained on that drive.
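A minimal sketch of making an external drive self-contained by dumping the CouchDB documents that describe its files onto the drive next to the files; this uses the python-couchdb package, and the database name and dot-folder are assumptions, not dmedia's actual layout (a real version would only dump the docs for files stored on that drive):

    import json, os
    import couchdb

    def dump_docs_for_drive(server_url, db_name, drive_mount):
        db = couchdb.Server(server_url)[db_name]
        out_dir = os.path.join(drive_mount, '.dmedia-docs')  # hypothetical folder
        os.makedirs(out_dir, exist_ok=True)
        for doc_id in db:
            with open(os.path.join(out_dir, doc_id + '.json'), 'w') as fh:
                json.dump(dict(db[doc_id]), fh, indent=2)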
Right on. Okay, do you want to try to summarize everything we've talked about, so we can kind of wrap this up?
We're making something cool, and there's lots of fun stuff to get involved in if you want to work on it.
I guess one thing I always like to talk about is kind of our design process, if that would be something you wouldn't mind me going on about.
Yeah, whatever you like, definitely. Yeah, I've definitely pulled Jason aside and had interesting discussions with him on Mumble.
Anybody who would like to do that is more than welcome to jump on Mumble and talk about Novacut or anything else they would like to.
There's a very Linux-y crowd that hangs around there.
And Jason, I know, when he has the opportunity, loves to talk to artists and try to understand their workflow issues, and understand how to make a tool that would be better suited for solving those.
Yeah, and that's, you know, that's a big part of why we don't have a Novacut UI yet, because, in terms of making a great UI for target users, I can't really do that without talking to target users a lot, especially because, you know, I'm not a professional storyteller.
Kind of the way stuff has settled in:
Tara kind of does the UX research, or the bulk of it, and her approach is kind of an anthropological approach.
She just, you know, talks to people, in person, online, on video, and then works on boiling that down into, okay, here's what users say they want, in their own words.
And then I kind of do the next step, which is to go from that and try to get it into specific technical requirements, and kind of prioritize based on how long stuff will take also; so if there's a feature that's like, okay, users say they want this, but this could be five years of work,
and there's another feature users say they want just as much and it looks like maybe a couple of months, we'd probably have a go at the two-month feature first.
And then I try to get things really clear in terms of the user intent, which then has to be expressed in the schema for how we describe the edits.
So you go from some possibly vague descriptions to something very concrete, like, okay, this clearly defines exactly what they're talking about, what needs to be expressed.
And then James comes in on the final step and goes from there; James actually loves working from the schema, pretty much,
and then, you know, lays out a great UI from that.
You have been listening to Hacker Public Radio at HackerPublicRadio.org.
We are a community podcast network that releases shows every weekday Monday through Friday.
Today's show, like all our shows, was contributed by an HPR listener like yourself.
If you ever consider recording a podcast, then visit our website to find out how easy it really is.
Hacker Public Radio was founded by the Digital Dog Pound and the Infonomicon Computer Club.
HPR is funded by the Binary Revolution at binrev.com; all binrev projects are crowd-sponsored by Lunarpages.
From shared hosting to custom private clouds, go to lunarpages.com for all your hosting needs.
Unless otherwise stated, today's show is released under a Creative Commons Attribution-ShareAlike 3.0 license.