
Happy to answer any questions about this release, and/or the future of Ardour.


Congratulations on the new release! I've seen some forum discussions on this in the past, and I'd imagine it's a frequently debated topic. However, I'd like to ask about the technical feasibility of implementing a feature similar to Ableton's 'Warp' within Ardour. I understand that Ardour and Ableton have fundamentally different architectures and that different DAWs can prioritize different workflows. Given the current state of the codebase and the development roadmap, I'm curious how realistic the implementation of BPM-synced time-stretching actually is or if it remains significantly outside the project's scope.

The biggest issue here is that the best library for doing audio warping (ZPlane) is not available to us. We already do realtime audio warping for clip playback, just like Ableton, using RubberBand (and might consider using Staffpad at some point, which we have available for static stretches).

However, following the tempo map is a very different challenge than following user-directed edits between warp markers, and neither RubberBand nor Staffpad really offer a good API for this.

In addition, the GUI side of this poses a lot of questions: do you regenerate waveforms on the fly to be accurate, or just use a GUI-only scaling of an existing waveform to display things during the editing operation?

We would certainly like to do this, and have a pretty good idea of how to do it. The devil, as usual, is in the details, and there are rather a lot of them.

There's also the detail that having clips be bpm-synced addresses somewhere between 50% and 90% of user needs for audio warping, which reduces the priority for doing the human-edited workflow.


>do you regenerate waveforms on the fly to be accurate, or just use a GUI-only scaling of an existing waveform, to display things during the editing operation

just use GUI scaling, and only IF the former is too challenging


You often want sample accurate waveform visualization when tuning samples that are time or pitch warped to set start and loop points at zero crossings to avoid clicks without needing fades.

Overwhelmingly, there's no such thing as a zero crossing. Your closest real-world case is a point in time (between samples) where the previous sample is positive and the next one is negative (or vice versa). However, by truncating the next sample to zero, you create distortion (and if the absolute value of the preceding sample is large, very significant distortion).

Zero crossings were an early myth in digital audio promulgated by people who didn't know enough.

Fades are always the best solution in terms of limiting distortion (though even then, they can fail in pathological situations).


There's definitely such a thing as a zero crossing: it's where sign(x[n-1]) != sign(x[n]) (or rather, there's "no such thing as a zero crossing" in the same way there's no such thing as a peak). Picking a suitable `n` as a start/end point for sample editing is a judgement call, because what you're trying to minimize is the difference between two samples, since it's conceptually a unit impulse in the sequence.

I don't think people who talk about zero crossings were totally misguided. It's a legitimate technique for picking start/end points of your samples and tracks. Even as a first step before BLEP or fades.
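In code, the sign-change definition above is short; here's a minimal sketch (illustrative names, plain Python, not anything from Ardour) that also picks the crossing with the smallest implied discontinuity:

```python
def zero_crossings(x):
    """Indices n where the signal changes sign between x[n-1] and x[n]."""
    return [n for n in range(1, len(x))
            if (x[n - 1] >= 0) != (x[n] >= 0)]

def best_edit_point(x):
    """Of all sign changes, pick the one with the smallest jump
    |x[n] - x[n-1]|, i.e. the smallest discontinuity a cut would create."""
    return min(zero_crossings(x), key=lambda n: abs(x[n] - x[n - 1]))

signal = [0.5, 0.2, -0.1, -0.4, 0.3, 0.9]
print(zero_crossings(signal))   # [2, 4]
print(best_edit_point(signal))  # 2 (jump of ~0.3 vs ~0.7 at n=4)
```

Note that even the "best" such point still has a non-zero jump, which is the parent's point about it only being a first step.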


Theoretically, it makes sense (go look at any of the diagrams of what a "zero crossing" is online, and it totally does).

The problem is that sign(x[n-1]) != sign(x[n]) describes a place where two successive samples differ in sign, but no sample actually has a value of zero. Thus, to perform an edit there, if your goal is to avoid a click caused by truncating at a non-zero sample value, you need to add/assign a value of zero to a sample. This introduces distortion - you are artificially changing the shape of the waveform, which implies the introduction of all kinds of frequency artifacts.

Zero crossings are not computed by finding a minimum between two consecutive samples - that would almost never involve a sign change. And if they are computed by finding the minimum between two consecutive samples that also involves a sign change, there's a very good chance that you'll be a long way from your desired cut point, even if you ignore the distortion issue.

It really was a completely misguided idea. If the situation was:

     sign(x[n-2]) != sign(x[n]) && x[n-1] == 0
then it would be great. But this essentially never happens in real audio.
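A quick illustration of that claim (a synthetic example, not real program material): sample a sine whose period is not a whole number of samples and count how many samples are exactly zero versus how many sign changes occur:

```python
import math

SR = 48000
# one second of a 441.3 Hz sine with a small phase offset: the period is
# not an integer number of samples, so samples essentially never land on zero
x = [math.sin(2 * math.pi * 441.3 * n / SR + 0.1) for n in range(SR)]

exact_zeros = sum(1 for v in x if v == 0.0)
sign_changes = sum(1 for n in range(1, SR)
                   if (x[n - 1] >= 0) != (x[n] >= 0))

print(exact_zeros)   # 0
print(sign_changes)  # roughly 2 * 441.3 ~= 882
```

Hundreds of sign changes per second, and not one sample that is actually zero.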

> Thus, to perform an edit there, if your goal is to avoid a click by truncating with a non-zero sample value, you need to add/assign a value of zero to a sample.

No, you (the editor, not an algorithm) look at the waveform and see where the amplitude begins to significantly oscillate and place the edit at a reasonable point, like where the signal is near the noise floor and at a point where it crosses zero. There's no zero stuffing.

This kind of thing isn't computed; a human being is looking at the waveform and listening back to choose where to drop the edit point. You don't always get it pop-free but it's much better than an arbitrary point as the sample is rising.

I mean, you could use an algorithm for this. It would be a pair of averaging filters with something like a VAD, but with lookahead, picking a point some position before activity is detected (peak - noise_floor > threshold), which could be where avg(x[n-N..n]) ~= noise_floor && sign(x[n]) != sign(x[n-1]).
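A rough sketch of that idea (illustrative Python with made-up names and thresholds, not a claim about any real editor's algorithm): compute a moving-average envelope, find where it first rises clearly above the noise floor, then step back before the onset to the nearest sign change:

```python
def find_edit_point(x, window=32, threshold=0.1):
    # moving-average envelope of |x|
    env, acc = [], 0.0
    for n, v in enumerate(x):
        acc += abs(v)
        if n >= window:
            acc -= abs(x[n - window])
        env.append(acc / min(n + 1, window))

    noise_floor = min(env)
    # first point where activity clearly exceeds the floor ("lookahead")
    onset = next(n for n, e in enumerate(env) if e - noise_floor > threshold)
    # back off by a window, then walk to the nearest preceding sign change
    n = max(onset - window, 1)
    while n > 1 and (x[n - 1] >= 0) == (x[n] >= 0):
        n -= 1
    return n

# 100 samples of near-silence followed by a loud burst
signal = [0.001, -0.001] * 50 + [0.4, 0.8, -0.6, 0.5] * 25
cut = find_edit_point(signal)
print(cut)  # lands on a sign change in the quiet region, before the burst at n=100
```

The cut still falls on a non-zero (if tiny) sample, so the truncation-distortion point upthread still applies; this only makes the click small, not absent.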


> You don't always get it pop-free but it's much better than an arbitrary point as the sample is rising.

I agree with this, but that doesn't invalidate anything I've said. When you or a bit of software decide to make the cut at x[n], you are faced with the near certainty that x[n] != 0. If you set it (or x[n+1]) to zero, you add distortion; if you don't, the risk of a pop is significant.

By contrast, if you apply a fade, the risk of getting a pop is negligible and you can make the cut anywhere you want without paying attention to 1 sample-per-pixel or finer zoom level and the details of the waveform.
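For illustration (a toy sketch, not Ardour's actual fade code): a short linear fade-out guarantees the boundary sample is exactly zero, so the cut can land anywhere without a step:

```python
def fade_out(x, fade_len):
    """Linearly fade the last fade_len samples of x down to zero."""
    out = list(x)
    for i in range(fade_len):
        n = len(x) - fade_len + i
        out[n] *= 1.0 - (i + 1) / fade_len
    return out

tail = [0.8, 0.7, 0.9, 0.6]
faded = fade_out(tail, 4)   # gains applied: 0.75, 0.5, 0.25, 0.0
print(faded[-1])  # 0.0 - the cut point is exactly zero, no step, no click
```

Real DAWs typically use short (often equal-power) crossfades at region boundaries for exactly this reason.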


Thanks very much, this sub-thread has been illuminating for me, and has the compelling quality of being obvious-in-retrospect. I now wonder what my MPC is doing, exactly, when I make an action at what appears to be a zero point. Thanks.

It's not as if a constantly changing single-axis non-linear transform is trivial to accomplish in the GUI either :(

Just wanting to say thanks to the whole team for creating such an inspiring and useful creative tool!

I'm most excited to try the perceptual analizer, which was something I found always had disappointing performance in plugins.

Which of the new features would you say posed the most interesting engineering challenge?


Well, I can't answer for @x42 (Robin Gareus) but for me personally the refactoring of the Editor code so that we could have multiple "editors" was both interesting and hugely challenging.

I didn't want to replicate the code we already had for the Editor, and figuring out how to refactor this took a lot of time and experimentation and failures. Although there are still some rough spots, in general I'm very happy with how things turned out.

Clip recording was also a bit of a challenge. It uses an entirely different mechanism than timeline recording, and as usual I got the basics working in a couple of days, followed by months of polishing (and likely, quite a few more to go as we get feedback from users).


> analizer

analyzer. I think analizer has a different meaning.


Just wanted to say thanks one more time!

We have been running Ardour 9 for a while now during band rehearsals. Currently 12 channels that we record and monitor in realtime with some effects on top.


I've got 9.0 running self-built on Fedora 43; it's working OK, but I have had a couple of crashes (which I can't figure out how to report); a segfault:

    #1 0x00007f5085ea1663 in jack_port_type_to_ardour_data_type (jack_type=0x0) at ../libs/backends/jack/jack_portengine.cc:71
    #2 0x00007f5085ea346a in ARDOUR::JACKAudioBackend::port_data_type (this=0x1ac84ad0, port=...) at ../libs/backends/jack/jack_portengine.cc:465
    #3 0x0000000000e3c719 in PortGroupList::gather

which I think is a missing null check in port_data_type? (from IOSelector::setup_ports -> PortMatrix::setup_global_ports)

I also got an assert in:

    #5 0x00000000011fa927 in StartupFSM::check_session_parameters (this=0x3dcf00a0, must_be_new=true) at ../gtk2_ardour/startup_fsm.cc:740
    #6 0x00000000011f80d2 in StartupFSM::dialog_response_handler (this=0x3dcf00a0, response=-3, dialog_id=StartupFSM::NewSessionDialog) at ../gtk2_ardour/startup_fsm.cc:267

I think this was from opening an existing Ardour project that I'd copied onto this machine; it was the first run of Ardour on this machine.


Sorry, we can't support self-builds (or distro builds).

If you want to check it out, there are free/demo builds at https://ardour.org/download

Bug tracker is at https://tracker.ardour.org/ (sorry that it requires a separate login, but hey, that's Mantis for you)

ps. if you downvoted this, you're welcome to offer support for the full 80+ external libraries in our build stack. Reach out here or at discourse.ardour.org ...


That's ok, but I think if you spend a minute looking at that backtrace you'll see it's pretty obvious where you need to add a check.

The problem is that not a single person has reported this before, so until it's confirmed as affecting the official builds from ardour.org, it can't be a priority for us.

Thank you Paul, for all the years you've been doing this. For the patience and keeping subscription for your binaries affordable. For how you managed to keep it opensource, alive AND expanding.

You likely could not have bought a single coffee with my lousy subscription contribution from over a decade ago - which makes me respect all the more how it has been developed.


Hi! I have been happily using Ardour as a hobbyist since version 5. At the same time I also started learning Pure Data. I was wondering how difficult it would be to implement a feature similar to "The Grid" from Bitwig. I’m not sure whether this could be done as a simple plugin, or if it would require much deeper integration with Ardour.

Most likely we would do a closer-than-normal integration of Cardinal ....

https://cardinal.kx.studio/

You can already load Cardinal as a plugin and get the full scope of its power(s) (or VCV Rack, if you paid for the "pro" version). You just don't get the GUI "integrated" into Ardour, and it's tied to a specific track.

We might do this via I/O plugins (an existing Ardour feature), which would make the inputs & outputs of Cardinal be just like your hardware. Lots of details to that sort of design, however.

There is also PlugData which could theoretically be handled in a similar way.

What we will not try to do is to implement Yet Another Software Modular Environment ourselves. Cardinal/Rack (or even PD) are approximately infinitely better than anything we could or would do.


Plugdata (a rework of Puredata as an LV2 plug-in) fills that role pretty well

Do you test on different kernel preemption models? If so, do you feel PREEMPT_RT really gives an advantage over full preemption with threadirqs?

(Cyclictest gives me between a 3x and 5x worst-case latency improvement depending on the background load, but I'm not nearly musically skilled enough to try a real-world test.)


We don't care much about "full preemption" because the only threads that have time-critical behavior are all scheduled in the SCHED_FIFO and/or SCHED_RR classes. If you had other workloads that could benefit from preemption without using realtime scheduling, then full preemption could be the way to go.
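For anyone curious what those scheduling classes look like from user space, here's a hedged sketch (Linux-only API; the priority value is arbitrary, and this normally requires CAP_SYS_NICE or an rtprio limit in /etc/security/limits.conf):

```python
import os

def try_enable_fifo(priority=1):
    """Try to move the calling process into the SCHED_FIFO class,
    the kind of realtime class Ardour requests for time-critical audio threads."""
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except (AttributeError, PermissionError, OSError):
        # non-Linux platform, or insufficient privileges
        return False

print("SCHED_FIFO granted:", try_enable_fifo())
```

Under SCHED_FIFO/SCHED_RR, the audio threads preempt anything in the default SCHED_OTHER class regardless of the kernel's preemption model, which is why PREEMPT_RT matters less for this workload than it might seem.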

We haven't really tested this sort of thing for quite a few years.


Every set of release notes that's intended to double as a press release needs to involve the judicious inclusion of a blurb that immediately explains what the hell the thing actually is.

    document.querySelector("#content .section-header + p").className = "date"

    document.querySelector("#content .date + p").outerHTML = (`
      <p>
        We are pleased to announce the release of Ardour 9.0.
      </p>

      <p style="font-style:italic; font-size:smaller; margin:2em 4em">
        Ardour is a free and open-source digital audio workstation
        app that works cross-platform on Linux desktops, Mac OS, and
        Windows. Get Ardour or get involved with the community at
        <a href="https://ardour.org/">Ardour.org</a>.
      </p>

      <p>
        Ardour 9.0 is a major release for the project, seeing several
        substantive new features that users have asked for over a long
        period of time. Region FX, clip recording, a touch-sensitive
        GUI, pianoroll windows, clip editing and more, not to mention
        dozens of bug fixes, new MIDI binding maps, improved GUI
        performance on macOS (for most)...
      </p>
    `)


Hi! I recently (2 weeks) chose this software to invest time into in order to make music/sfx for video games. Do you personally use this software to create music yourself? Just curious!

Not really. My only released music is a single album at https://pauldavismusic.bandcamp.com/album/suspended-generati... which was made almost entirely with VCV Rack (Ardour was used for some pretty minimal editing). I've also used it for a short podcast series ("Audio Developer Chats") that is currently offline. I do try to talk to musicians and engineers almost every single day about what they're doing, however.


