Arpeggiate by numbers

Workaday automatic composition and sequencing

January 8, 2015 — September 20, 2021

algebra
generative art
making things
music
stringology

Where my audio software frameworks page does more DSP, this is mostly about MIDI—choosing notes, not timbres. A cousin of generative art with machine learning, with less AI and more UX.

Sometimes you don’t want to measure a chord, or hear a chord; you just want to write a chord.

See also machine listening, musical corpora, musical metrics, synchronisation. The discrete, symbolic cousin to analysis/resynthesis.

Related projects: How I would do generative art with neural networks and learning gamelan.

Figure 2: Colin Morris’ SongSim visualizes lyrics rather than notes, in fact, but wow, don’t they look handsome?

1 Sonification

MIDITime maps time series data onto notes with some basic music theory baked in.
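
MIDITime has its own small API; as an illustration of the underlying trick (my sketch, using mido rather than MIDITime’s interface), the following maps a numeric series onto a diatonic scale, one note per beat, and writes a standard MIDI file.

```python
import mido

def sonify(series, tempo_bpm=120, scale=(0, 2, 4, 5, 7, 9, 11), base=60):
    """Map a numeric series onto a two-octave diatonic scale and write a MIDI file."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0
    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)
    track.append(mido.MetaMessage('set_tempo', tempo=mido.bpm2tempo(tempo_bpm)))
    ticks = mid.ticks_per_beat  # one note per beat
    for x in series:
        degree = int((x - lo) / span * (len(scale) * 2 - 1))  # scale degree over two octaves
        pitch = base + 12 * (degree // len(scale)) + scale[degree % len(scale)]
        track.append(mido.Message('note_on', note=pitch, velocity=96, time=0))
        track.append(mido.Message('note_off', note=pitch, velocity=0, time=ticks))
    mid.save('sonified.mid')

sonify([0.1, 0.5, 0.9, 0.3, 0.7])
```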

2 Geometric approaches

Dmitri Tymoczko claims music data is most naturally regarded as existing on an orbifold (a “quotient manifold”), which I’m sure you could do some clever regression upon, but I can’t yet see how. Orbifolds are, AFAICT, something like what you get when you have a bag of regressors instead of a tuple, and evoke the string-bag models of the natural-language information-retrieval people, except there is not as much hustle for music as there is for NLP. Nonetheless manifold regression is a thing, and regression on manifolds is also a thing, so there is surely some stuff to be done there. Various complications arise. For example: it’s not a single scalar (which note) we are predicting at each time step, but some kind of joint distribution over several notes and their articulations. Or is it even sane to do this one step at a time? Lengthy melodic figures and motifs dominate in real compositions; how do you represent those tractably?
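
To make the quotient idea concrete without any of the geometry: a chord becomes a single point once we forget octave and note order, and optionally transposition as well. A toy sketch of my own (Tymoczko’s construction additionally keeps continuous voice-leading structure, which this throws away):

```python
def chord_point(pitches):
    """Map MIDI pitches to a point on the quotient: pitch classes (mod octave), unordered."""
    return tuple(sorted(p % 12 for p in pitches))

def transposition_class(pitches):
    """Quotient further by transposition: pick a canonical representative over all 12 shifts."""
    pcs = chord_point(pitches)
    return min(tuple(sorted((p - t) % 12 for p in pcs)) for t in range(12))

# C major (C4, E4, G4) and an inverted, octave-shifted voicing land on the same point...
assert chord_point([60, 64, 67]) == chord_point([76, 67, 48])
# ...and any transposition of it lands on the same transposition class.
assert transposition_class([60, 64, 67]) == transposition_class([62, 66, 69])
```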

Further, it’s the joint distribution of the evolution of the harmonics and the noise and all that other timbral content that our ear can resolve, not just the symbolic melody. And we know from psychoacoustics that these will be coupled: the dissonance of two tones depends on the frequency and amplitude of the spectral components of each, to name one commonly-postulated factor.

In any case, these wrinkles aside, if I could predict the conditional distribution of the sequence in a way that produced recognizably musical sound, then simulate from it, I would be happy, for a variety of reasons. So I guess if I cracked this problem in the apparently direct way, it would be by something like “nonparametric vector regression on an orbifold”, with possibly heroic computation-wrangling en route.

3 Neural approaches

MuseNet is a famous current one from OpenAI:

We’ve created MuseNet, a deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combine styles from country to Mozart to the Beatles. MuseNet was not explicitly programmed with our understanding of music, but instead discovered patterns of harmony, rhythm, and style by learning to predict the next token in hundreds of thousands of MIDI files. MuseNet uses the same general-purpose unsupervised technology as GPT-2, a large-scale transformer model trained to predict the next token in a sequence, whether audio or text.
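
For a sense of what “learning to predict the next token” means with the transformer machinery stripped out, here is a toy bigram version over MIDI pitch numbers. MuseNet’s tokens are richer (instrument, timing, dynamics), but the loop has the same shape: count what follows what, then sample.

```python
import random
from collections import defaultdict, Counter

def fit_bigram(sequences):
    """Count next-token frequencies: a toy stand-in for 'predict the next token'."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def sample(counts, start, length=16):
    """Generate by repeatedly sampling a successor in proportion to its count."""
    out = [start]
    for _ in range(length - 1):
        successors = counts.get(out[-1])
        if not successors:
            break
        tokens, weights = zip(*successors.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return out

corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60], [60, 64, 67, 72, 67, 64, 60]]
print(sample(fit_bigram(corpus), start=60))
```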

Google’s Magenta produces some sorta-interesting stuff, or at least stuff I always feel is so close to actually being interesting without quite making it. Midi-me, a light transfer-learning(?) approach to personalizing an overfit or overlarge MIDI composition model, looks like a potentially nice hack, for example.

4 Composition assistants

4.1 Nestup

Totally my jam! A context-free tempo grammar for musical beats. Free.

Nestup. cutelabnyc/nested-tuplets: Fancy javascript for manipulating nested tuplets.

4.2 Scaler

Scaler 2:

Scaler 2 can listen to incoming MIDI or audio data and detect the key your music is in. It will suggest chords and chord progressions that will fit with your song. Scaler 2 can send MIDI to a virtual instrument in your DAW, but it also has 30 onboard instruments to play with as well.
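
The key-detection part is less mysterious than it sounds. I have no idea what Scaler actually does internally, but a crude pitch-class tally against each candidate scale, as in the sketch below, already works tolerably for diatonic material; real implementations typically weight the tally with empirical key profiles.

```python
from collections import Counter

MAJOR = {0, 2, 4, 5, 7, 9, 11}  # major-scale degrees as pitch classes relative to the tonic

def guess_key(midi_pitches):
    """Score each of the 12 major keys by how many played notes fall inside its scale."""
    pcs = Counter(p % 12 for p in midi_pitches)
    def score(tonic):
        return sum(n for pc, n in pcs.items() if (pc - tonic) % 12 in MAJOR)
    return max(range(12), key=score)  # tonic pitch class, 0 = C

print(guess_key([60, 62, 64, 65, 67, 69, 71, 72]))  # -> 0, i.e. C major-ish
```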

4.3 J74

⭐⭐⭐⭐ (EUR12 + EUR15)

J74 Progressive and J74 Bassline, Fabrizio Poce’s apps, are chord progression generators built with Ableton Live’s scripting engine, so if you are using Ableton they might be handy. I was using them myself before I quit Ableton for Bitwig; I enjoyed them, but I don’t miss them. They did make Ableton crash on occasion, so they are not suited for live performance, which is a pity, because that would be a wonderful value proposition if they were. The real-time-oriented J74 HarmoTools from the same guy are less sophisticated but worth trying, especially since they are free, and he has a lot of other clever hacks there too. Do go to his site and try his stuff out.

4.4 Helio

⭐⭐⭐⭐ Free

Figure 3: Helio in action

Helio is free and cross-platform and totally worth a shot. There is a chord model in there and version control (!) but you might not notice the chord thing if you aren’t careful, because the UI is idiosyncratic. Great for left-field inspiration, if not a universal composition tool.

4.5 Orca

Free, open source. ⭐⭐⭐⭐

Figure 4: orca in action

Orca is a bespoke, opinionated, weird, grid-based programmable sequencer. It doesn’t aspire to solve every composition problem, but it does guarantee weird, individual, quirky algorithmic mayhem. It’s made by two people who live on a boat.

It can run in a browser.

4.6 Hookpad

⭐⭐⭐ (Freemium/USD149)

Hookpad is a spin-off of the cult pop-analysis website Hook Theory. I ran into it later than Odesi, so I frame my review in terms of Odesi, though it might be older. Compared to Odesi it shares the weakness of being a webapp; however, because it is basically just a web page, rather than a multi-gigabyte monster app that also has the restrictions of a web page, it is less aggravating than Odesi. It assumes a little (but not much) more music theory from the user. Also a plus: it is attached to an excellent library of pop-song chord-progression examples and analysis in the form of the (recommended) Hook Theory site.

4.7 Odesi

⭐⭐ (USD49)

Figure 5: odesi in action

Odesi has been doing lots of advertising of their poptastic interface to generate pop music. It’s like Synfire-lite, with a library of top-40 melody tricks and rhythms. The desktop version tries to install gigabytes of synths of meagre merit on your machine, which is a giant waste of space and time if you are using a computer which already has synths on it, which you are, because this is not the 90s, and in any case you presumably have this app because you are already a music producer and therefore already have synths. However, unlike 90s apps, it requires you to be online, which is dissatisfying if you like to be offline in your studio so you can get shit done without distractions. So aggressive is it in its desire to be online that any momentary interruption in your internet connection causes the interface to hang for 30 seconds, presenting you with a reassurance that none of your work is lost. Then it reloads, with some of your work nonetheless lost. A good idea marred by an irritating execution that somehow combines the worst of webapps and desktop apps.

4.8 Intermorphic

USD25/USD40

Intermorphic’s Mixtikl and Noatikl are granddaddy esoteric composer apps. Although the creators have doubtless put much effort into their groundbreaking user interfaces, I have not used them, because of the marketing material, which is notable for an inability to explain their apps or provide compelling demonstrations or use cases. I get the feeling they had high-art aspirations but have ended up doing ambient noodles in order to sell product. Maybe I’m not being fair. If I find spare cash at some point I will find out.

4.9 Rozeta

Ruismaker’s Rozeta (iOS) is a series of apps implementing every nifty, fashionable sequencer algorithm in recent memory. I don’t have an iPad, though, so I will not review them.

4.10 Rapid Compose

The makers of Rapid Compose (USD99/USD249) might make decent software, but they can’t clearly explain why their app is nice, or provide a demo version.

4.11 Synfire

EUR996, so I won’t be buying it, but wow, what a demo video.

Synfire explains how it uses music theory to do large-scale scoring etc. Get the string section to behave itself or you’ll replace them with MIDIbots.

4.12 Harmony Builder

USD39-USD219 depending on heinously complex pricing schemes.

Harmony Builder does classical music theory for you. It will pass your conservatorium finals.
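
For the flavour of what such rule-checking involves, here is a toy fragment of my own (certainly not Harmony Builder’s implementation): scan two voices for parallel perfect fifths.

```python
def parallel_fifths(voice_a, voice_b):
    """Return indices where two voices move in parallel perfect fifths."""
    hits = []
    for i in range(len(voice_a) - 1):
        ivl_now = (voice_a[i] - voice_b[i]) % 12
        ivl_next = (voice_a[i + 1] - voice_b[i + 1]) % 12
        moved = voice_a[i] != voice_a[i + 1]
        if ivl_now == 7 and ivl_next == 7 and moved:
            hits.append(i)
    return hits

# Soprano and bass moving C/G -> D/A: textbook parallel fifths at index 0.
print(parallel_fifths([67, 69], [60, 62]))  # -> [0]
```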

4.13 Roll your own

You can’t resist rolling your own?

  • sharp11 is a node.js music theory library for javascript, with a demo application that creates jazz improv.

  • SuperCollider of course does this like it does everything else, which is to say, quirkily badly. Designing user interfaces for it takes years off your life. OTOH, if you are happy with text, this might be a goer.
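
As a starting point for the roll-your-own genre, a dependency-free sketch of my own, which conveniently doubles as a trivial arpeggiator for the next section: take a chord progression as semitone offsets and emit a stream of (pitch, beat) events.

```python
def arpeggiate(progression, root=60, pattern=(0, 1, 2, 1), beats_per_chord=4):
    """Yield (midi_pitch, beat) events cycling through each chord's tones."""
    events = []
    beat = 0
    for chord in progression:                    # chord = semitone offsets from the root
        for step in range(beats_per_chord):
            tone = chord[pattern[step % len(pattern)] % len(chord)]
            events.append((root + tone, beat))
            beat += 1
    return events

# I-vi-IV-V in C, up-down arpeggio, one note per beat.
progression = [(0, 4, 7), (9, 12, 16), (5, 9, 12), (7, 11, 14)]
for pitch, beat in arpeggiate(progression):
    print(beat, pitch)
```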

5 Arpeggiators

6 Constraint Composition

All of that too mainstream? Try a weird alternative formalism! How about constraint composition? That is, declarative musical composition, by defining constraints on the relations that the notes must satisfy. It sounds fun in the abstract, but in practice it doesn’t especially grab me as a creative tool.

The reference tool for that purpose seems to be Strasheela, built on an obscure, unpopular, and apparently discontinued Prolog-like language called “Oz” (a.k.a. “Mozart”), because using a popular language is not as grand a gesture as claiming that none of them are quite Turing-complete enough, in the right way, for your special snowflake application. That language is a ghost town, which means headaches if you wish to use Strasheela in practice; if you wanted to actually use constraint methods, you’d probably use Overtone + miniKanren (Prolog-for-Lisp), as with the composing schemer, or, to be even more mainstream, just use a conventional constraint solver in a popular language. I am fond of python and ncvx, but there are many choices.
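
As a tiny illustration of the declarative style in a mainstream language (here the python-constraint package, nothing to do with Strasheela): ask for a four-note C-major figure in which consecutive notes differ, no leap exceeds a fifth, and the line ends on the tonic.

```python
# pip install python-constraint
from constraint import Problem

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # one octave of C major, as MIDI pitches

problem = Problem()
problem.addVariables(["n1", "n2", "n3", "n4"], C_MAJOR)

# Declarative rules: the solver finds notes satisfying all of them at once.
for a, b in [("n1", "n2"), ("n2", "n3"), ("n3", "n4")]:
    problem.addConstraint(lambda x, y: x != y and abs(x - y) <= 7, (a, b))
problem.addConstraint(lambda x: x % 12 == 0, ("n4",))  # end on C

solution = problem.getSolution()
print([solution[v] for v in ["n1", "n2", "n3", "n4"]])
```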

Anyway, Prolog fans can read on: see Anders and Miranda (2010); Anders and Miranda (2011).

7 Random ideas

  • How would you reconstruct a piece from its recurrence matrix? Or at least constrain pieces by their recurrence matrix? (The forward direction is sketched below.)
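
Computing the matrix itself is the easy part; the open question above is the inverse problem of going from such a matrix (or constraints on it) back to notes. A minimal forward sketch in numpy:

```python
import numpy as np

def recurrence_matrix(notes, transposition_invariant=False):
    """R[i, j] = 1 where positions i and j repeat each other."""
    pcs = np.asarray(notes) % 12                    # compare pitch classes
    R = (pcs[:, None] == pcs[None, :]).astype(int)
    if transposition_invariant:
        # compare melodic intervals instead of absolute pitch classes
        ivl = np.diff(np.asarray(notes))
        R = (ivl[:, None] == ivl[None, :]).astype(int)
    return R

print(recurrence_matrix([60, 62, 64, 60, 62, 64]))
```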

8 Incoming

9 References

Anders, and Miranda. 2010. “Constraint Application with Higher-Order Programming for Modeling Music Theories.” Computer Music Journal.
———. 2011. “Constraint Programming Systems for Modeling Music Theories and Composition.” ACM Computing Surveys.
Baddeley, Adrian J, Møller, and Waagepetersen. 2000. “Non- and Semi-Parametric Estimation of Interaction in Inhomogeneous Point Patterns.” Statistica Neerlandica.
Baddeley, A. J., Van Lieshout, and Møller. 1996. “Markov Properties of Cluster Processes.” Advances in Applied Probability.
Bigo, Giavitto, and Spicher. 2011. “Building Topological Spaces for Musical Objects.” In Proceedings of the Third International Conference on Mathematics and Computation in Music. MCM’11.
Bod. 2001. “What Is the Minimal Set of Fragments That Achieves Maximal Parse Accuracy?” In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics. ACL ’01.
———. 2002a. “A Unified Model of Structural Organization in Language and Music.” Journal of Artificial Intelligence Research.
———. 2002b. “Memory-Based Models of Melodic Analysis: Challenging the Gestalt Principles.” Journal of New Music Research.
Boggs, and Rogers. 1990. “Orthogonal Distance Regression.” Contemporary Mathematics.
Borghuis, Tibo, Conforti, et al. 2018. “Off the Beaten Track: Using Deep Learning to Interpolate Between Music Genres.” arXiv:1804.09808 [Cs, Eess].
Borgs, Chayes, Cohn, et al. 2014. “An \(L^p\) Theory of Sparse Graph Convergence I: Limits, Sparse Random Graph Models, and Power Law Distributions.” arXiv:1401.2906 [Math].
Boulanger-Lewandowski, Bengio, and Vincent. 2012. “Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription.” In 29th International Conference on Machine Learning.
Brette. 2008. “Generation of Correlated Spike Trains.” Neural Computation.
Briot, Hadjeres, and Pachet. 2017. “Deep Learning Techniques for Music Generation - A Survey.” arXiv:1709.01620 [Cs].
Budney, and Sethares. 2014. “Topology of Musical Data.” Journal of Mathematics and Music.
Colley, and Dean. 2019. “Origins of 1/f Noise in Human Music Performance from Short-Range Autocorrelations Related to Rhythmic Structures.” PLoS ONE.
Collins, and Duffy. 2002. “Convolution Kernels for Natural Language.” In Advances in Neural Information Processing Systems 14.
Croft. 2015. “Composition Is Not Research.” Tempo.
Dean. 2017. “Generative Live Music-Making Using Autoregressive Time Series Models: Melodies and Beats.” Journal of Creative Music Systems.
Di Lillo, Motta, and Storer. 2010. “A Rotation and Scale Invariant Descriptor for Shape Recognition.” In 2010 17th IEEE International Conference on Image Processing (ICIP).
Eigenfeldt, and Pasquier. 2013. “Considering Vertical and Horizontal Context in Corpus-Based Generative Electronic Dance Music.” In Proceedings of the Fourth International Conference on Computational Creativity.
Elmsley, Weyde, and Armstrong. 2017. “Generating Time: Rhythmic Perception, Prediction and Production with Recurrent Neural Networks.” Journal of Creative Music Systems.
Gashler, and Martinez. 2011. “Tangent Space Guided Intelligent Neighbor Finding.” In.
———. 2012. “Robust Manifold Learning with CycleCut.” Connection Science.
Gillick, Tang, and Keller. 2010. “Machine Learning of Jazz Grammars.” Computer Music Journal.
Gontis, and Kaulakys. 2004. “Multiplicative Point Process as a Model of Trading Activity.” Physica A: Statistical Mechanics and Its Applications.
Goroshin, Bruna, Tompson, et al. 2014. “Unsupervised Learning of Spatiotemporally Coherent Metrics.” arXiv:1412.6056 [Cs].
Graves. 2013. “Generating Sequences With Recurrent Neural Networks.” arXiv:1308.0850 [Cs].
Hadjeres, and Pachet. 2016. “DeepBach: A Steerable Model for Bach Chorales Generation.” arXiv:1612.01010 [Cs].
Hadjeres, Sakellariou, and Pachet. 2016. “Style Imitation and Chord Invention in Polyphonic Music with Exponential Families.” arXiv:1609.05152 [Cs].
Hall. 2008. “Geometrical Music Theory.” Science.
Harris, and Drton. 2013. “PC Algorithm for Nonparanormal Graphical Models.” Journal of Machine Learning Research.
Haussler. 1999. “Convolution Kernels on Discrete Structures.”
Herremans, and Chuan. 2017. “Modeling Musical Context with Word2vec.” In Proceedings of the First International Conference on Deep Learning and Music, Anchorage, US, May, 2017.
Hinton, Osindero, and Bao. 2005. “Learning Causally Linked Markov Random Fields.” In Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics.
Hsü, and Hsü. 1991. “Self-Similarity of the ‘1/f Noise’ Called Music.” Proceedings of the National Academy of Sciences.
Huang, Vaswani, Uszkoreit, et al. 2018. “Music Transformer: Generating Music with Long-Term Structure.”
Huron. 1994. “Interval-Class Content in Equally Tempered Pitch-Class Sets: Common Scales Exhibit Optimum Tonal Consonance.” Music Perception: An Interdisciplinary Journal.
Hutchings. 2017. “Talking Drums: Generating Drum Grooves with Neural Networks.” In arXiv:1706.09558 [Cs].
Jordan, and Weiss. 2002. “Probabilistic Inference in Graphical Models.” Handbook of Neural Networks and Brain Theory.
Kaulakys, Gontis, and Alaburda. 2005. “Point Process Model of \(1∕f\) Noise Vs a Sum of Lorentzians.” Physical Review E.
Kontorovich, Cortes, and Mohri. 2008. “Kernel Methods for Learning Languages.” Theoretical Computer Science, Algorithmic Learning Theory.
Korzeniowski, Sears, and Widmer. 2018. “A Large-Scale Study of Language Models for Chord Prediction.” arXiv:1804.01849 [Cs, Eess, Stat].
Kroese, and Botev. 2013. “Spatial Process Generation.” arXiv:1308.0399 [Stat].
Krumin, and Shoham. 2009. “Generation of Spike Trains with Controlled Auto- and Cross-Correlation Functions.” Neural Computation.
Lafferty, and Wasserman. 2008. “Rodeo: Sparse, Greedy Nonparametric Regression.” The Annals of Statistics.
Lee, Ganapathi, and Koller. 2006. “Efficient Structure Learning of Markov Networks Using \(L_1\)-Regularization.” In Advances in Neural Information Processing Systems.
Liu, Chen, Wasserman, et al. 2010. “Graph-Valued Regression.” In Advances in Neural Information Processing Systems 23.
Liu, Han, Yuan, et al. 2012. “The Nonparanormal SKEPTIC.” arXiv:1206.6488 [Cs, Stat].
Liu, Lafferty, and Wasserman. 2009. “The Nonparanormal: Semiparametric Estimation of High Dimensional Undirected Graphs.” Journal of Machine Learning Research.
Liu, Roeder, and Wasserman. 2010. “Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models.” In Advances in Neural Information Processing Systems 23.
Lodhi, Saunders, Shawe-Taylor, et al. 2002. “Text Classification Using String Kernels.” Journal of Machine Learning Research.
Madjiheurem, Qu, and Walder. 2016. “Chord2Vec: Learning Musical Chord Embeddings.”
Meinshausen, and Bühlmann. 2006. “High-Dimensional Graphs and Variable Selection with the Lasso.” The Annals of Statistics.
———. 2010. “Stability Selection.” Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Møller, and Waagepetersen. 2007. “Modern Statistics for Spatial Point Processes.” Scandinavian Journal of Statistics.
Montanari. 2015. “Computational Implications of Reducing Data to Sufficient Statistics.” Electronic Journal of Statistics.
Moustafa, Schuurmans, and Ferrie. 2013. “Learning a Metric Space for Neighbourhood Topology Estimation: Application to Manifold Learning.” In Journal of Machine Learning Research.
Papadopoulos, Pachet, Roy, et al. 2015. “Exact Sampling for Regular and Markov Constraints with Belief Propagation.” In Principles and Practice of Constraint Programming. Lecture Notes in Computer Science.
Pollard. 2004. “Hammersley-Clifford Theorem for Markov Random Fields.”
Possolo. 1986. “Estimation of Binary Markov Random Fields.” Department of Statistics Preprints, University of Washington, Seattle.
Rathbun. 1996. “Estimation of Poisson Intensity Using Partially Observed Concomitant Variables.” Biometrics.
Ravikumar, Pradeep D., Liu, Lafferty, et al. 2007. “SpAM: Sparse Additive Models.” In NIPS.
Ravikumar, Pradeep, Wainwright, and Lafferty. 2010. “High-Dimensional Ising Model Selection Using ℓ1-Regularized Logistic Regression.” The Annals of Statistics.
Reese, Yampolskiy, and Elmaghraby. 2012. “A Framework for Interactive Generation of Music for Games.” In 2012 17th International Conference on Computer Games (CGAMES). CGAMES ’12.
Ripley, and Kelly. 1977. “Markov Point Processes.” Journal of the London Mathematical Society.
Sethares. 1997. “Specifying Spectra for Musical Scales.” The Journal of the Acoustical Society of America.
Sethares, Milne, Tiedje, et al. 2009. “Spectral Tools for Dynamic Tonality and Audio Morphing.” Computer Music Journal.
Tillmann, Bharucha, and Bigand. 2000. “Implicit Learning of Tonality: A Self-Organizing Approach.” Psychological Review.
Tsushima, Nakamura, Itoyama, et al. 2017. “Generative Statistical Models with Self-Emergent Grammar of Chord Sequences.” arXiv:1708.02255 [Cs].
Tymoczko. 2006. “The Geometry of Musical Chords.” Science.
———. 2009. “Generalizing Musical Intervals.” Journal of Music Theory.
van Lieshout. 1996. “On Likelihoods for Markov Random Sets and Boolean Models.” In Proceedings of the International Symposium.
Veitch, and Roy. 2015. “The Class of Random Graphs Arising from Exchangeable Random Measures.” arXiv:1512.03099 [Cs, Math, Stat].
Voss, and Clarke. 1978. “1/f Noise in Music: Music from 1/f Noise.” The Journal of the Acoustical Society of America.
Walder, and Kim. 2018. “Neural Dynamic Programming for Musical Self Similarity.” In International Conference on Machine Learning.
Wasserman, Kolar, and Rinaldo. 2013. “Estimating Undirected Graphs Under Weak Assumptions.” arXiv:1309.6933 [Cs, Math, Stat].
Witten, Daniela M, and Tibshirani. 2009. “Extensions of Sparse Canonical Correlation Analysis with Applications to Genomic Data.” Statistical Applications in Genetics and Molecular Biology.
Witten, Daniela M., Tibshirani, and Hastie. 2009. “A Penalized Matrix Decomposition, with Applications to Sparse Principal Components and Canonical Correlation Analysis.” Biostatistics.
Yanchenko, and Mukherjee. 2017. “Classical Music Composition Using State Space Models.” arXiv:1708.03822 [Cs].
Yang, Chou, and Yang. 2017. “MidiNet: A Convolutional Generative Adversarial Network for Symbolic-Domain Music Generation.” In Proceedings of the 18th International Society for Music Information Retrieval Conference (ISMIR’2017), Suzhou, China.
Yedidia, Freeman, and Weiss. 2005. “Constructing Free-Energy Approximations and Generalized Belief Propagation Algorithms.” IEEE Transactions on Information Theory.