On coding, tech, privacy, AI, and whatever else comes to mind.

  • Introducing Longplay 2.0

    I’m excited to launch a big 2.0 update to my album-focussed music player, Longplay. If you’re interested in the app itself and what’s new, head over to longplay.rocks or the App Store. In this post I want to share some of the story behind the update.

    The idea for 2.0

    Longplay 1.0 was released in August 2020. I had used the app myself for years before that, but I didn’t know how it would be received by a wider audience. I loved the feedback I got, which helped me distill the heart of the app: Music means a lot to people, and Longplay helps them reconnect with their music library in a way that reminds them of their old vinyl or CD collections. It’s a wall of their favourite albums that has been with them for many years or decades. It’s something personal. The UI very much focussed on that part of the experience, and I wanted to keep that spirit alive and keep the app fun, while adding features that others and I found missing.

    The main idea behind 2.0 was to focus on playing music beyond a single album. 1.0 simply stopped playback when you finished an album, but I wanted to stay in the flow – to either play a fitting random album next or the next one from a manually specified queue.

    The new features

    [Video: Collections on iPhone]

    With that in mind, I focused on the following features:

    • Collections: Create collections of albums and playlists – a great way to group albums by mood or language, and to tuck those sleep or children’s albums away somewhere.
    • Album Shuffle: Automatically continue playing a random album, either by the selected sort order or by your selected collection.
    • Queue: Long-tap albums or drag them to the queue to play them next, if you don’t want to shuffle.

    And while building this, a few things led to other things:

    In order to control the playback queue, I switched from the system player to an application-specific player. That meant I also needed an in-app Now Playing view. It also meant that playback counts and ratings were no longer synced with the system, so I added a way to track those internally. That, together with the collections, meant it was time to add sync using iCloud. And so that the playback counts don’t just stay in the app, you can connect your Last.fm or ListenBrainz account.
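    As a rough illustration of that switch (not Longplay’s actual code, and assuming the MediaPlayer framework), the difference is essentially moving from the shared system player to an application queue player whose queue the app controls itself:

    ```swift
    import MediaPlayer

    // Sketch only: the application queue player keeps a queue that the app fully
    // controls, independent of the Music app. The trade-off is that playback no
    // longer updates the system's play counts and ratings, hence the need for
    // internal tracking (and iCloud/Last.fm/ListenBrainz syncing) of those.
    let player = MPMusicPlayerController.applicationQueuePlayer

    func play(album: MPMediaItemCollection, followedBy next: MPMediaItemCollection? = nil) {
        var items = album.items
        if let next { items += next.items }   // e.g. a manually queued album
        player.setQueue(with: MPMediaItemCollection(items: items))
        player.play()
    }
    ```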

    A lot changed under the hood, too. It isn’t visible to the user, but it will make it easier to support more platforms – see further below. It also let me add CarPlay support to this update, and, as a tester put it:

    Love how Longplay keeps getting better - the new CarPlay ability is so wonderful.


    [Screenshots: Track list · Colourful Now Playing]


    Polishing the UI

    Longplay is a labour of love, and I enjoy polishing up the UI. This meant a lot of iterations (especially on iPad, where I was going back and forth), and I particularly want to call out the great, constructive feedback I got from Apple designers during a WWDC design lab, as well as the feedback from my beta testers. Special thanks here to Adrian Nier, who provided lots of detailed feedback and some great suggestions.

    A particularly fun feature to build was adding a shuffle button to the Now Playing view. This is a destructive action, so I wanted to make it harder to trigger than a simple button press. I ended up with a button that you have to hold down; while held, it shuffles through the albums akin to a slot machine, and it plays the album you land on when you let go. It comes with visual and haptic feedback, and if it whizzed past an album that you wanted to play, you can drag left to go back manually. In the words of Matt Barrowclift:

    That “shuffle albums” feature is insanely fun, it’s practically a fidget toy in the best possible way. Longplay 2.0’s the only player I’m aware of that makes the act of shuffling fun.
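    For the curious, here’s a rough sketch of how such a hold-to-shuffle control could be built – purely illustrative, not the app’s actual implementation; the `ShuffleButton` class, its `albums` identifiers, and the `onPick` callback are made up for this example:

    ```swift
    import UIKit

    // Illustrative sketch only: hold the button to cycle through albums with a
    // haptic tick for each step, then play whichever album is showing on release.
    final class ShuffleButton: UIButton {
        var albums: [String] = []            // hypothetical album identifiers
        var onPick: ((String) -> Void)?      // called with the album to play
        private var timer: Timer?
        private var index = 0
        private let haptics = UISelectionFeedbackGenerator()

        override init(frame: CGRect) {
            super.init(frame: frame)
            addGestureRecognizer(UILongPressGestureRecognizer(target: self, action: #selector(handleHold(_:))))
        }

        required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

        @objc private func handleHold(_ gesture: UILongPressGestureRecognizer) {
            switch gesture.state {
            case .began:
                // Spin like a slot machine while the press is held.
                timer = Timer.scheduledTimer(withTimeInterval: 0.15, repeats: true) { [weak self] _ in
                    guard let self, !self.albums.isEmpty else { return }
                    self.index = Int.random(in: 0..<self.albums.count)
                    self.haptics.selectionChanged()   // tick for each album shown
                }
            case .ended, .cancelled:
                // Play whatever album the "slot machine" stopped on.
                timer?.invalidate()
                if !albums.isEmpty { onPick?(albums[index]) }
            default:
                break
            }
        }
    }
    ```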

    [Video: Shuffling · Screenshot: Little dictionary of sort orders]


    Pricing model

    Initially I was aiming for a paid upgrade, but when I compared my past sales to the effort involved, I decided to spend that time on other things. So the app stays paid upfront for now, making 2.0 a free upgrade, but I bumped the price a bit. I’ll likely revisit this down the track, though, as it’d be nice for potential users to have a way of trying (parts of) the app.

    What’s next

    I’m glad this update is finally making it out, as it’s been a long time in the making.

    Look out for another update coming soon for iOS 17, making the home screen widgets interactive.

    I love hearing feedback about the app, as well as suggestions for features, so please get in touch with me or post on the feedback site. As for bringing the app to more platforms: macOS is in the works, seeing a life-size album wall in Vision Pro is pretty amazing, and people are asking for Apple TV support.

    Get the app/update on the App Store.


    Developing this app is incredibly fun, as it’s something I use myself almost every day. Here are my playback stats since I added internal playback tracking in mid-2022:

    [Screenshot: Playback stats – “Thoroughly tested”]

  • Insights from Microsoft Build for app developers

    It’s conference season for developers, and Microsoft’s annual conference, Build, just wrapped up. I haven’t paid particular attention to Build in the past, but it’s been interesting to follow this year due to Microsoft’s close collaboration with OpenAI and their push to integrate generative AI across the board. My primary interest is in what tools they provide to developers and what overarching paradigms they – and OpenAI – are pushing.

    The key phrase was Copilots, which can be summarised as “AI assistants” that help you with complex cognitive tasks, but specifically do not do these tasks fully by themselves. Microsoft repeatedly pointed out the various limitations of Large Language Models (LLMs) and how to work within these limitations when building applications. Microsoft presented a suite of tools under the Azure AI umbrella to help developers build these applications, leverage the power of OpenAI’s models (as well as other models), and do so in a manner where your data stays private and secure, and is not used to train other models.

    I’ve captured my full and raw notes separately, but here are the key points I took away from the conference:

    • LLMs have their limits, but crucially aren’t aware of them, and users cannot be expected to be aware of this either. This requires a different approach to building applications, as out-of-the-box LLMs will appear more powerful than they are. As a developer you’ll need to add guardrails to your application to keep the model on task, complement it with your own data, and provide a way for users to understand the limitations of the model and make it easy for them to spot mistakes and correct them.
    • Several best practices are emerging for working with LLMs, in particular meta prompts and how to write those in a way that encourages the model to create accurate responses. This is a very iterative process, and you’ll want to test and evaluate your prompts and responses. Microsoft has a service called Prompt Flow that helps with this process.
    • Grounding the model on your data helps to minimise hallucination/confabulation/bullshitting. Fine-tuning the model is a powerful option, though it’s fairly elaborate and you can’t tell from the generated output how much was based on your fine-tuned data versus the base model. Instead you can inject your data as context into the prompt, i.e., using the Retrieval Augmented Generation (RAG) approach (see the sketch after this list). Azure will provide services for indexing your data both traditionally with keywords and also as vector embeddings, and then automatically injecting them into the prompt.
    • Plugins are a powerful way to add dynamic information to your prompts, and to take action on other systems. I’m curious to see what the best practices will be here.
    • When integrating chat capabilities into your app, you’ll want to consider content filtering to make sure the bot stays on topic and steers clear of topics of a harmful, violent, or sexual nature. Microsoft has a Content Safety service that can be used to filter out undesirable content across various categories and intensity levels. Generated content will be automatically filtered.
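    To make the grounding/RAG idea from the list above concrete, here’s a minimal sketch. It’s in Swift purely for illustration; `searchIndex` and `chatCompletion` are hypothetical stand-ins for a search service and an LLM endpoint (e.g. Azure Cognitive Search and Azure OpenAI), not real APIs:

    ```swift
    import Foundation

    // Hypothetical stand-ins: in practice these would be backed by real services,
    // such as a search index over your own data and a hosted chat model.
    func searchIndex(query: String, topK: Int) async throws -> [String] { [] }
    func chatCompletion(prompt: String) async throws -> String { "" }

    func answer(question: String) async throws -> String {
        // 1. Retrieve the most relevant snippets from your own data.
        let snippets = try await searchIndex(query: question, topK: 3)

        // 2. Inject them into the prompt so the model is grounded in that data.
        let prompt = """
        Answer the question using only the context below. \
        If the context is insufficient, say you don't know.

        Context:
        \(snippets.joined(separator: "\n---\n"))

        Question: \(question)
        """
        return try await chatCompletion(prompt: prompt)
    }
    ```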

    The talk by OpenAI’s Andrej Karpathy is well worth a watch, as it provides a good high-level discussion of the process, data, and effort that went into GPT-4, the strengths and weaknesses of LLMs, and how to best work within those constraints. The talk by Microsoft’s Kevin Scott and OpenAI’s Greg Brockman is also worth a watch, as it provides a good overview of the tools and services that Microsoft is providing to developers and how to best use them, including some insightful demos.

    The main tools that Microsoft announced (though most are still in “Preview” stage and not publicly available):

    • Azure OpenAI Service provides broad access to OpenAI’s models, but also lets you use your own models. It’s a managed service that takes care of infrastructure and scaling, provides a simple API to access the models, and offers various add-on services on top, so that you can focus more on your own data and your own application.
    • AI Studio can best be described as a user-friendly web interface to build and configure your own chatbot. Tell it what data to use, configure and evaluate the meta prompt, set content filters, test and deploy as a web app.
    • Prompt Flow is a web tool to help you evaluate and improve your meta prompts: compare the performance of different variations, do bulk testing of prompts, and evaluate them on various metrics.
    • Azure Cognitive Search is an indexing and search service for your data. Previously it offered keyword-based search, and it is gaining support for “vector” search, which means that you can find matching documents by meaning or similarity, even if there’s no match on a keyword basis (a sketch of the idea follows this list).
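    To illustrate what “vector” search means – a sketch of the general idea, not Azure’s API – documents and queries are turned into embedding vectors by a model, and matches are ranked by how similar those vectors are rather than by shared keywords:

    ```swift
    // Cosine similarity between two embedding vectors; higher means closer in meaning.
    func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
        precondition(a.count == b.count, "Embeddings must have the same dimension")
        let dot = zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
        let magnitudeA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
        let magnitudeB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
        return dot / (magnitudeA * magnitudeB)
    }

    // Rank documents by how close their embeddings are to the query's embedding.
    func bestMatches(query: [Double], documents: [(id: String, embedding: [Double])], topK: Int) -> [String] {
        documents
            .sorted { cosineSimilarity(query, $0.embedding) > cosineSimilarity(query, $1.embedding) }
            .prefix(topK)
            .map { $0.id }
    }
    ```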

    Microsoft is leading the charge and benefiting from their partnership with OpenAI, and their long-term investments into Azure are clearly paying off. Their focus on keeping data safe and trusted, without having it used to train further models or to improve their services, is a key benefit for commercial users. I look forward to getting access to these tools and trying them out.

    From an app developer’s perspective, nothing noteworthy stood out to me, as the tools presented are all server-side and web-focussed. The APIs can of course be used from apps, but I can’t help but wonder what Apple’s take on this would look like – Apple seem preoccupied with their upcoming headset (to be announced in the next 24 hours), so their take on this might be due in 2024 instead. I’d be curious what on-device processing and integrating user data in a privacy-preserving manner would enable.


  • Happy birthday to me ♬, sings this blog. 11 years!

  • State of AI, May 2023

    Discussions about dangers keep heating up:

    Historian Yuval Noah Harari has big fears about the impact of LLMs, despite their technical limitations – or maybe because of those.

    Godfather of AI and inventor of Deep Learning, Geoffrey Hinton, quits Google so that he can openly voice his mind and concerns about AI. In particular, he’s cited by The Conversation as urging us to stop arguing about whether LLMs are AI, to acknowledge that it’s some different kind of intelligence, to stop comparing it to human intelligence, and to focus on the real-world impacts it’ll have in the near term: job loss, misinformation, autonomous weapons.

    Meanwhile, on Honestly with Bari Weiss, OpenAI CEO Sam Altman (read or listen) does not agree with the open letter asking his company to pause and let competitors catch up, but he does share the concerns about the downsides of LLMs and asks for regulation and safety structures led by the government, similar to nuclear technology or aviation. OpenAI tries to be mindful of AI security and alignment1, but Altman worries that competitors trying to catch up will cut corners – indirectly making an argument for OpenAI to pause after all, so that competitors can catch up without having to cut those corners. However, he also worries that efforts in other countries (China) ignore these questions. He also suggests that AI companies should not follow traditional structures, and that democratically elected leaders for them would make more sense.

    Other good reads and listens on what’s happening lately:

    • Hard Fork discussion on Drake: Loved the discussion on what AI-generated music by fans could mean for artists. There are certain songs I love listening to on repeat, and a few years ago there was this “infinite song” web app floating around that stitched a song into a semi-generative infinite loop. I’d love a version of that which uses generative AI, possibly mixing different songs together.
    • MLC TestFlight runs an LLM locally on a modern iPhone. Remarkable that it works. While GPT-4 is leaps and bounds ahead in terms of quality, those locally run models could catch up. LLMs likely follow an S-curve in terms of their quality; makes me wonder if GPT-4 is near the top, or still near the start or middle.
    • Ars Technica - Report describes Apple’s “organizational dysfunction” and “lack of ambition” in AI: A rather damning report of how Apple is falling behind other companies in exploring the leaps that AI is taking and the new paradigms it opens up. LLM-based improvements to iOS are expected next year at the earliest, i.e., September 2024. I gave up on Siri a while ago.
    • Is your website in an LLM: Great journalism here, digging into what makes up (some) LLMs. I was happy to see that a bunch of my blog got slurped up, as I hope it might help some developer encountering an obscure problem – though acknowledgement and source links would be appreciated.
    • Surprising things happen when you put 25 AI agents together in an RPG town: Report on the fascinating emergent behaviours of running multiple LLM-agents. Hard Fork has a fun discussion of it, too. Really curious how LLMs will impact games.
    1. AI alignment means making sure that, if AI exceeds human intelligence, its goals are aligned with the goals of humanity – and it doesn’t treat us like ants. 


  • Tech Note: UIHostingConfiguration can cause UICollectionView recursion
  • Updated my Mac Setup page to reflect how my usage of tools has changed. I’ve since replaced Alfred with Raycast, and Backblaze with Arq. And I’ve stopped using Dropbox, mosh, and, you-were-great-at-the-time-but-I-really-do-not-miss-you, Carthage.

  • Second public beta for the upcoming Mac version of Longplay is ready.

    • 🆕 Track list in the Mini Player
    • 🆕 Control AirPlay from the Mini Player
    • 🐛 Better handling of albums where some tracks are playable and others are DRM-protected

    Any feedback is highly appreciated.

  • Tech Note: Using Tree-sitter for syntax highlighting in Jekyll
  • One of my favourite uses of Maparoni is visualising live data and analysing it with the various formulas. Over on the Maparoni blog, I’ve written up a post about the improvements that the latest beta brings to that. Turns out, writing formula autocompletion that feels right is tricky!

  • Submitted the iOS 15 update for Longplay. I really like how the dynamic sizing of albums by different metrics turned out. See what else is new in 1.2 in the changelog.

subscribe via RSS or via JSON