{ "version": "https://jsonfeed.org/version/1", "title": "Adrian's Corner", "description": "I'm Adrian Schönig. This is my little corner of the web with information about my apps and sporadic blog posts.\n", "home_page_url": "https://adrian.schoenig.me", "feed_url": "https://adrian.schoenig.me/feed.json", "author": {"name":"Adrian Schönig","email":"adrian@schoenig.me"}, "items": [ { "id": "https://adrian.schoenig.me/blog/2023/08/31/longplay-2.0/", "url": "https://adrian.schoenig.me/blog/2023/08/31/longplay-2.0/", "title": "Introducing Longplay 2.0", "content_html": "
I’m excited to launch a big 2.0 update to my album-focussed music player, Longplay. If you’re interested in the app itself and what’s new, head over to longplay.rocks or the App Store. In this post I want to share some of the story behind the update.
\n\n\n\nLongplay 1.0 was released in August 2020. I had used the app for years before that myself, but I didn’t know how it would be received by a wider audience. I loved the kind of feedback that I got, which helped me distill the heart of the app: Music means a lot to people, and Longplay helps them reconnect with their music library in a way that reminds them of their old vinyl or CD collections. It’s a wall of their favourite albums that has been with them for many years or decades. It’s something personal. The UI very much focussed on that part of the experience, and I wanted to keep that spirit alive and keep the app fun, while adding features that users and I found missing.
\n\nThe main idea behind 2.0 was to focus on the playing of music beyond a single album. 1.0 just stopped playback when you finished an album, but I wanted to stay in the flow – to either play an appropriate random next album or the next from a manually specified queue.
\n\nWith that in mind, I focused on the following features:
\n\nAnd while building this, a few things led to other things:
\n\nIn order to control the playback queue, I switched from the system player to an application-specific player. That meant I also needed an in-app Now Playing view. It also meant that playback counts and ratings were no longer synced with the system, so I added a way to track those internally. That and the collections meant it was time to add iCloud sync, and, so that the playback counts don’t just stay in the app, you can connect your Last.fm or ListenBrainz account.
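To make that switch concrete, here’s a hedged sketch (not Longplay’s actual code) of the MediaPlayer-level difference; the store IDs are made up:

import MediaPlayer

// The system player mirrors the Music app and keeps play counts and
// ratings in sync, but doesn't let the app own a custom queue:
// let player = MPMusicPlayerController.systemMusicPlayer

// The application player owns its queue – enabling the album-to-album
// flow – at the cost of no longer feeding play counts back to the system:
let player = MPMusicPlayerController.applicationQueuePlayer
player.setQueue(with: ["1440857781", "1452873021"]) // hypothetical store IDs
player.play()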
\n\nUnder the hood a lot changed, too, which isn’t visible to the user but will make it easier to support more platforms – see further below. It also let me add CarPlay support in this update, and, as a tester put it:
\n\n\n\n\nLove how Longplay keeps getting better - the new CarPlay ability is so wonderful.
\n
Longplay is a labour of love, and I enjoy polishing up the UI. This meant a lot of iterations (especially on iPad, where I was going back and forth), and I particularly want to call out the great, constructive feedback I got from Apple designers during a WWDC design lab, and the feedback from my beta testers. Special thanks here to Adrian Nier, who has provided lots of detailed feedback and some great suggestions.
\n\nA particularly fun feature to build was adding a shuffle button to the Now Playing view. This is a destructive action, so I wanted to make it harder to trigger than just a button press. I ended up with a button that you have to hold down: it then starts shuffling through the albums akin to a slot machine, and plays the album where you let go. It comes with visual and haptic feedback, and if it whizzed past an album that you wanted to play, you can also drag left to go back manually. In the words of Matt Barrowclift:
\n\n\n\n\nThat “shuffle albums” feature is insanely fun, it’s practically a fidget toy in the best possible way. Longplay 2.0’s the only player I’m aware of that makes the act of shuffling fun.
\n
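As a rough sketch of the interaction described above – all names are hypothetical, and it is surely much simpler than the real implementation – the hold-to-shuffle mechanic boils down to a long-press-driven timer with haptic ticks:

import UIKit

final class SlotMachineShuffle: NSObject {
    var albums: [String] = []      // hypothetical album identifiers
    var play: ((String) -> Void)?  // called with the album to play on release
    private var timer: Timer?
    private var index = 0
    private let haptics = UISelectionFeedbackGenerator()

    func attach(to button: UIButton) {
        button.addGestureRecognizer(
            UILongPressGestureRecognizer(target: self, action: #selector(held(_:))))
    }

    @objc private func held(_ gesture: UILongPressGestureRecognizer) {
        switch gesture.state {
        case .began: // holding: whizz through the albums like a slot machine
            timer = Timer.scheduledTimer(withTimeInterval: 0.15, repeats: true) { [weak self] _ in
                guard let self, !self.albums.isEmpty else { return }
                self.index = (self.index + 1) % self.albums.count
                self.haptics.selectionChanged() // a tick for each album passing by
            }
        case .ended, .cancelled: // letting go: play wherever it stopped
            timer?.invalidate()
            if albums.indices.contains(index) { play?(albums[index]) }
        default:
            break
        }
    }
}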
Initially I was aiming for a paid upgrade, but when I compared my past sales to the effort, I decided to use that time for other things. So the app stays paid upfront for now, making 2.0 a free upgrade, but I bumped the price a bit. I’ll likely revisit this down the track, though, as it’d be nice for potential users to have a way of trying (parts of) the app.
\n\nI’m glad this update finally makes it out, as it’s been a long time in the making.
\n\nLook out for another update coming soon for iOS 17, making the home screen widgets interactive.
\n\nI love hearing feedback about the app and also suggestions for features, so please get in touch with me or post on the feedback site. As for bringing the app to more platforms: macOS is in the works, seeing a life-size album wall in Vision Pro is pretty amazing, and people are asking for Apple TV support.
\n\nGet the app/update on the App Store.
\n\nDeveloping this app is incredibly fun, as it’s something I use myself almost every day. Here are my playback stats since I added the internal playback tracking in the middle of 2022:
\n\nIt’s conference season for developers and Microsoft’s annual conference Build just wrapped up. I haven’t paid particular attention to Build in the past, but it’s been interesting to follow this year due to Microsoft’s close collaboration with OpenAI and them pushing ahead with integrating generative AI across the board. My primary interest is in what tools they provide to developers and what overarching paradigms they – and OpenAI – are pushing.
\n\nThe key phrase was Copilots, which can be summarised as “AI assistants” that help you with complex cognitive tasks, and specifically do not do these tasks fully by themselves. Microsoft repeatedly pointed out the various limitations of Large Language Models (LLMs) and how to work within these limitations when building applications. Microsoft presented a suite of tools under the Azure AI umbrella to help developers build these applications, leverage the power of OpenAI’s models (as well as other models), and do so in a manner where your data stays private and secure, and is not used to train other models.
\n\nI’ve captured my full and raw notes separately, but here are the key points I took away from the conference:
\n\nThe talk by OpenAI’s Andrej Karpathy is well worth a watch, as it provides a good high-level discussion of the process, data, and effort that went into GPT-4, the strengths and weaknesses of LLMs, and how to best work within those constraints. The talk by Microsoft’s Kevin Scott and OpenAI’s Greg Brockman is also worth a watch, as it provides a good overview of the tools and services that Microsoft is providing to developers and how to best use them, including some insightful demos.
\n\nThe main tools that Microsoft announced (though most are still in “Preview” stage and not publicly available):
\n\nMicrosoft is leading the charge and benefiting from their partnership with OpenAI, and their long-term investments into Azure are clearly paying off. Their focus on keeping data safe and trusted, without having it used to train further models or to improve their services, is a key benefit for commercial users. I look forward to getting access to these tools and trying them out.
\n\nFrom an app developer’s perspective nothing noteworthy stood out to me, as the presented tools are all server-side and web-focussed. The APIs can of course be used from apps, but I can’t help but wonder what Apple’s take on this would look like – Apple seem preoccupied with their upcoming headset (to be announced in the next 24 hours), so their take on this might be due in 2024 instead. I’d be curious about what on-device processing and integrating user data in a privacy-preserving manner would enable.
\n", "date_published": "2023-06-04T00:00:00+10:00", "date_modified": "2023-06-04T00:00:00+10:00" }, { "id": "https://adrian.schoenig.me/blog/2023/05/05/state-of-ai/", "url": "https://adrian.schoenig.me/blog/2023/05/05/state-of-ai/", "title": "State of AI, May 2023", "content_html": "Discussions about dangers keep heating up:
\n\nHistorian Yuval Noah Harari has big fears about the impact of LLMs, despite their technical limitations – or maybe because of those.
\n\nGodfather of AI and inventor of Deep Learning, Geoffrey Hinton, quits Google so that he can openly voice his mind and concerns about AI. In particular, The Conversation cites him as urging us to stop arguing about whether LLMs are AI, to acknowledge that they are some different kind of intelligence, to stop comparing them to human intelligence, and to focus on the real-world impacts they’ll have in the near term: job loss, misinformation, autonomous weapons.
\n\nMeanwhile, on Honestly with Bari Weiss, OpenAI CEO Sam Altman (read or listen) does not agree with the open letter asking his company to pause and let competitors catch up, but he does share the concerns about the downsides of LLMs and asks for regulation and safety structures led by the government, similar to nuclear technology or aviation. OpenAI tries to be mindful of AI security and alignment¹, but Altman worries that competitors trying to catch up will cut corners – indirectly making an argument for OpenAI to pause after all, so that competitors can catch up without having to cut these corners. However, he also worries that efforts in other countries (China) ignore these questions. He also suggests that AI companies should not follow traditional structures, and that democratically elected leaders for them would make more sense.
\n\nOther good reads and listens on what’s happening lately:
\n\nAI alignment means making sure that, if AI exceeds human intelligence, its goals are aligned with the goals of humanity – and it doesn’t treat us like ants. ↩
\nRotating a particular screen in one of my iPhone apps from portrait to landscape resulted in a half-black screen, a crash, and the following mouthful of an error being logged:
\n\nTerminating app due to uncaught exception 'NSInternalInconsistencyException', \nreason: 'UICollectionView (...) is stuck in its update/layout loop. This can\nhappen for various reasons, including self-sizing views whose preferred \nattributes are not returning a consistent size. To debug this issue, check the\nConsole app for logs in the \"UICollectionViewRecursion\" category. In particular,\nlook for messages about layout invalidations, or changes to properties like\ncontentOffset (bounds.origin), bounds.size, frame, etc.\n
After quite a while of reviewing my collection view’s layout and views, and commenting out different blocks, I uncovered what’s causing the crash. It’s a combination of the following¹:

- A .frame() modifier anywhere in the SwiftUI views that the (self-sizing) cells host.
- A sectionInset set on the layout.

In my testing this only crashes on an iPhone, but iPad is fine.
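For concreteness, here’s a hypothetical minimal setup of the kind described above – my assumptions, not the actual app code – combining a flow layout’s section inset with self-sizing cells that host SwiftUI content using .frame():

import SwiftUI
import UIKit

struct Album: Hashable { let title: String }

// A flow layout with self-sizing cells...
let layout = UICollectionViewFlowLayout()
layout.estimatedItemSize = UICollectionViewFlowLayout.automaticSize
// ...plus a section inset – the ingredient that, when removed, avoided the crash:
layout.sectionInset = UIEdgeInsets(top: 16, left: 16, bottom: 16, right: 16)

// Cells hosting SwiftUI content (iOS 16+), with a .frame() modifier inside:
let cellRegistration = UICollectionView.CellRegistration<UICollectionViewCell, Album> { cell, _, album in
    cell.contentConfiguration = UIHostingConfiguration {
        Text(album.title)
            .frame(maxWidth: .infinity) // the other ingredient of the layout loop
    }
}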
\n\nIt is rather disappointing, as I switched from a pure SwiftUI-based grid view to one backed by UICollectionView so that I could use multi-item selection on iPad, as that’s not yet available through SwiftUI.
\n\nI did not find an easy fix. Avoiding the section inset is a quick workaround, but might make achieving the layout you want harder. I will see how I go with that, or I might end up going all SwiftUI for this view on iPhone (as multi-item selection isn’t that critical for my use case there) and keep it based on UICollectionView on iPad.
\n\nAs confirmed in a simple sample project, and reported to Apple as FB11986452. ↩
\nWhile exploring a particular rabbit hole for Maparoni, I came across this challenge:
\n\n\n\n\n\n\nHow can you use a Tree-sitter grammer for syntax highlighting in a Jekyll blog – or, for that matter, just any HTML page by using JavaScript?
\n
Jekyll is implemented in Ruby; however, while there is a repo with Ruby bindings for Tree-sitter, these are out-of-date (last commit in early 2020) and are for a previous version of Tree-sitter. The question of this challenge was also raised there, pointing at the answer, which is: use Tree-sitter’s official web bindings from JavaScript instead.
\n\nThis is not well documented and required a lot of trial and error to get it working. Here are my steps, for anyone who faces the same challenge.
\n\nThe main assumption here is that you have a tree-sitter grammar for a language that isn’t covered by existing syntax highlighting, such as Pygments, Rouge, or Highlight.js. In my case, I wrote¹ a tree-sitter grammar for Maparoni’s formulas. The app uses that for syntax highlighting in its built-in formula editor, and I want to use the same syntax highlighting for the documentation on the website.
\n\nThe second assumption is that you only care about highlighting code in that language, and not any other language. Handling multiple languages with this method, or handling this on top of another syntax highlighter are separate issues.
\n\nEither disable all syntax highlighting by editing your _config.yml
:
kramdown:\n highlighter: none\n syntax_highlighter: none\n
Or on a per-page basis by adding this line:
\n\n{::options syntax_highlighter=\"nil\" /}\n
First, get the tree-sitter.js
and tree-sitter.wasm
from the official web bindings. This will provide the main parser and query functionality. Put them in an appropriate place, e.g., your assets/js
folder.
Then load these by adding this to the <head> of the relevant template
:
<script type=\"text/javascript\" src=\"/assets/js/tree-sitter.js\"></script>\n
Next, you’ll need two files from the repo of your tree-sitter grammar:
\n\n- The tree-sitter-MyLanguage.wasm for the language of your choice, i.e., the custom tree-sitter grammar.
- The highlights.scm file for the language of your choice, i.e., the queries that the syntax highlighter will need. This is typically in a queries folder.

Also put these into an appropriate place, such as assets/js/tree-sitter-MyLanguage.wasm and tree-sitter-MyLanguage/highlights.scm.
With that, we can get going. I’ll walk step-by-step how to build the syntax highlighter.
\n\nConfigure a Parser object and provide it your specific language:
\n\nconst Parser = window.TreeSitter;\n(async () => {\n await Parser.init();\n const parser = new Parser();\n const MyLanguage = \n await Parser.Language.load('/assets/js/tree-sitter-MyLanguage.wasm');\n parser.setLanguage(MyLanguage);\n // ...\n});\n
Let’s assume our code sits in an HTML element with the class .language-MyLanguage. We can grab these elements using document.querySelectorAll and then tell the parser to parse them:
// ...\nconst codeBlocks = document.querySelectorAll('.language-MyLanguage');\ncodeBlocks.forEach((el) => {\n const tree = parser.parse(el.innerHTML);\n console.log(tree.rootNode.toString());\n // ...\n});\n
That prints the syntax tree for each of your code blocks.
\n\nOne thing you might encounter here is that characters such as < would be converted to &lt;, which the grammar probably won’t handle. So let’s fix that by decoding that HTML:
function htmlDecode(input) {\n var doc = new DOMParser().parseFromString(input, \"text/html\");\n return doc.documentElement.textContent;\n}\n
And we can then use htmlDecode(el.innerHTML) rather than el.innerHTML directly.
Now that we have a syntax tree for the code, let’s highlight it. We’ll need to use the Query API from the tree-sitter web bindings, which isn’t well documented at this stage, but we can see how to use it from the test suite.
\n\nThis is where the highlights.scm
comes into play. Let’s grab its contents, tell it to match against the syntax tree, and iterate over the matches:
// ...\nlet response = await fetch('/assets/js/tree-sitter-MyLanguage/highlights.scm');\nlet highlights = await response.text();\nconst query = MyLanguage.query(highlights);\n\nquery.matches(tree.rootNode).forEach((match) => {\n  console.log(match);\n  // ...\n});\n
This maps the code that was matched (by start and end indices) to the matching query name, such as “function”, “keyword” or “constant”. We can use that to build a new HTML string for the code block that adds CSS classes to each match.
\n\nconst code = htmlDecode(el.innerHTML);\nconst tree = parser.parse(code);\n\nvar adjusted = \"\";\nvar lastEnd = 0;\n\nquery.matches(tree.rootNode).forEach((match) => {\n const name = match.captures[0].name;\n const text = match.captures[0].node.text;\n const start = match.captures[0].node.startIndex;\n const end = match.captures[0].node.endIndex;\n\n if (start < lastEnd) {\n return; // avoid duplicate matches for the same text\n }\n if (start > lastEnd) {\n adjusted += code.substring(lastEnd, start);\n }\n adjusted += `<span class=\"${name}\">${text}</span>`;\n lastEnd = end;\n});\n\nif (lastEnd < code.length) {\n adjusted += code.substring(lastEnd);\n}\n\nel.innerHTML = adjusted;\n
Now all that’s left is to provide the relevant CSS for those span classes, such as:
\n\n.language-MyLanguage .variable { color: #cc6666; }\n.language-MyLanguage .function { color: #81a2be; }\n.language-MyLanguage .type     { color: #f0c674; }\n/* ... */\n
And we’re good to go.
\n\nSee below for the full script. It’s a very simple syntax highlighter and surely has some issues, but it’s working fine for my purposes so far.
\n\nwindow.onload = function() { highlight(); };\n\nfunction htmlDecode(input) {\n var doc = new DOMParser().parseFromString(input, \"text/html\");\n return doc.documentElement.textContent;\n}\n\nfunction highlight() {\n const Parser = window.TreeSitter;\n (async () => {\n await Parser.init();\n const parser = new Parser();\n const MyLanguage = \n await Parser.Language.load('/assets/js/tree-sitter-MyLanguage.wasm');\n parser.setLanguage(MyLanguage);\n\n let response = \n await fetch('/assets/js/tree-sitter-MyLanguage/highlights.scm');\n let highlights = await response.text();\n const query = MyLanguage.query(highlights);\n\n const codeBlocks = document.querySelectorAll('.language-MyLanguage');\n \n codeBlocks.forEach((el) => {\n const code = htmlDecode(el.innerHTML);\n const tree = parser.parse(code);\n\n var adjusted = \"\";\n var lastEnd = 0;\n\n query.matches(tree.rootNode).forEach((match) => {\n const name = match.captures[0].name;\n const text = match.captures[0].node.text;\n const start = match.captures[0].node.startIndex;\n const end = match.captures[0].node.endIndex;\n\n if (start < lastEnd) {\n return;\n }\n if (start > lastEnd) {\n adjusted += code.substring(lastEnd, start);\n }\n adjusted += `<span class=\"${name}\">${text}</span>`;\n lastEnd = end;\n });\n\n if (lastEnd < code.length) {\n adjusted += code.substring(lastEnd);\n }\n\n el.innerHTML = adjusted;\n });\n })();\n}\n
Quite a challenge in itself! The documentation, various grammars for other languages, and tree-sitter playground
help a lot though. ↩
Suppose you have a Swift library which uses CoreData and you’d like to use that in a command line tool for something under your control, such as running it on CI or distributing it to colleagues. You could go the route of creating a “Command Line Tool” project in Xcode, which definitely works, but distributing and signing the resulting executable can be a pain. Thanks to recent advances of the Swift Package Manager to support resources and build executables, there seems to be a simpler choice. Spoiler alert: There are hurdles.
\n\nAssuming we have:
\n\nWhat we want is to be able to distribute it by cloning the git repo and running swift run MyCLI
. Ideally we also want to be able to run swift build -c release
, keeping the resulting executable around and executing that later on.
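For the second half, SPM puts release binaries into its standard build folder, so – assuming the default configuration – that would look like this:

$ swift build -c release
$ .build/release/MyCLI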
As we’re developing the CLI, we run it in Xcode, everything works, and we’re ready to distribute. So we think we’re done and ready to call it a day. Let’s confirm locally that it works from the command line. We run swift run MyCLI
and surely we’re good, right? … Right?
No, unfortunately not. While it works in Xcode, and building using swift run
² works, our CLI breaks when it first initialises CoreData, as it can’t find the CoreData model. Stepping into the code, we see that Bundle.module.url(forResource: \"MyModel\", withExtension: \"momd\")
returns nil
.
What’s going wrong? From the SPM documentation, it sounds like we’re in the clear:
\n\n\n\n\nBy default, the Swift Package Manager handles common resource types for Apple platforms automatically. For example, you don’t need to declare XIB files, storyboards, Core Data file types, and asset catalogs as resources in your package manifest.
\n
Indeed, and it does work when running the CLI from Xcode. The issue is that swift build
does not support Xcode-specific resource types after all. SPM “handles” them, but only in a way that Xcode can work with – not in a way that makes them usable out-of-the-box in a CLI executable.
What do we do? We need another way of passing along the CoreData model.
\n\nA simple way is to copy the CoreData model from package A to package B, and add a new parameter to the CLI that takes the path of that model and uses it to initialise the CoreData stack:
\n\n$ swift run MyCLI --model Model.xcdatamodeld
But that doesn’t work. The issue here is that we have an xcdatamodeld file, while you initialise an NSManagedObjectModel
in code from a momd
file. Xcode handles that conversion, but we don’t have access to that magic when using swift build
.
We first need to compile that xcdatamodeld
file to a momd
and then ship that along with the CLI. This is done using the momc
executable that’s part of Xcode:
$ /Applications/Xcode.app/Contents/Developer/usr/bin/momc \\\n $(pwd)/Model.xcdatamodeld \\\n $(pwd)/Model.momd
With that ready, we can run:
\n\n$ swift run MyCLI --model Model.momd
That works, but it has the downside of cluttering up how the CLI is used, and we can’t just build it as a single executable but need to worry about taking the model along wherever we go.
\n\nCan we do better and somehow ship the model as part of our executable? NSManagedObjectModel
conforms to NSCoding
, so we can convert it to some data, encode that using Base64, stick the resulting string into our CLI, and then rebuild the model at runtime from that. Yes, it’s ugly, but it works!
The code to do this is straightforward:
\n\nlet model = NSManagedObjectModel(contentsOf: url)!\nlet encoder = NSKeyedArchiver(requiringSecureCoding: true)\nmodel.encode(with: encoder)\nprint(encoder.encodedData.base64EncodedString())\n
Take the output, add it somewhere in the code, and assign it to a string, and we can then do the inverse:
\n\nlet data = Data(base64Encoded: encodedModelData)!\nlet decoder = try NSKeyedUnarchiver(forReadingFrom: data)\nlet model = NSManagedObjectModel(coder: decoder)!\nlet coordinator = NSPersistentContainer(name: \"Model\", managedObjectModel: model)\n
No, wait, it doesn’t actually work. Yes, it creates the model just fine, but trying to instantiate a coordinator fails. It throws a run-time exception due to “Model has a reference to (null) (0x0)”. And, indeed, if we check the managedObjectModel
property of anything in model.entities
, we see that they are all nil
. We can’t just assign to that property either, as it’s read-only. But there’s another constructor which lets you merge models, and that, hooray, does the trick:
let data = Data(base64Encoded: encodedModelData)!\nlet decoder = try NSKeyedUnarchiver(forReadingFrom: data)\nlet decoded = NSManagedObjectModel(coder: decoder)!\nlet model = NSManagedObjectModel(byMerging: [decoded])!\nlet coordinator = NSPersistentContainer(name: \"Model\", managedObjectModel: model)\n
Now, we can finally run swift run MyCLI
.
So, we managed to build an executable using SPM that uses CoreData. We can use this in various places, such as, in my case, within a GitHub Action. I really like how Swift Package Manager is becoming a powerful addition to the Swift toolset that lets you take your code beyond iOS and macOS apps. Arguably CoreData isn’t a great fit for a command line tool, due to the necessary workarounds and being limited to macOS, but it’s great that it can be used this way, too.
\n\nWe ended up duplicating the model (albeit Base64-encoded) in the code base of the CLI, which isn’t great. Ideally we’d add this logic to the library itself, so that it can be used out-of-the-box, and have it as some kind of automatic build step, so that it’s automatically updated whenever the CoreData model itself is changed. A sketch of what that could look like follows below.
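As a hedged sketch of that idea – the type and constant names here are hypothetical, but the decoding code is the same as above:

import CoreData

public enum EmbeddedModel {
    // To be regenerated by that build step whenever the .xcdatamodeld changes;
    // this is the Base64 output of the encoding snippet above.
    static let base64 = "..."

    // Rebuilds the NSManagedObjectModel at runtime, including the merge
    // workaround for the “(null) (0x0)” reference issue.
    public static func load() throws -> NSManagedObjectModel {
        let data = Data(base64Encoded: base64)!
        let decoder = try NSKeyedUnarchiver(forReadingFrom: data)
        let decoded = NSManagedObjectModel(coder: decoder)!
        return NSManagedObjectModel(byMerging: [decoded])!
    }
}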
\n\nIn one short link post John Gruber indirectly makes an excellent case for breaking up tech giants:
\n\n\n\n\nReally good quarter for Nintendo — converting yen to USD, about $4 billion in revenue, $1.4 billion in profit. That’s great for them, but peanuts by U.S. tech giant standards. Fascinating how outsized Nintendo’s influence is on both the gaming industry and pop culture at large compared to their financial size.
\n
Yuo can flip the last sentence the other way around: Isn’t Nintendo’s success, influence and financial success a great indicator that this is an excellent size for a tech company? What similar heart-warming tech companies didn’t make it this far because they got crushed by the giants?
\n", "date_published": "2020-11-06T06:28:00+11:00", "date_modified": "2020-11-06T06:28:00+11:00" }, { "id": "https://adrian.schoenig.me/blog/2020/10/03/essence-of-an-app/", "url": "https://adrian.schoenig.me/blog/2020/10/03/essence-of-an-app/", "title": "The essence of an app", "content_html": "Horace Dediu has a way of putting things into an insightful perspective that makes you look at things differently. In the latest Critical Path episode, he points out that what people enjoyed about the iPod (and the Walkman before) wasn’t just that it let’s you take your music with you, but that it gives you privacy in a public space. Similarly, the product of a gym is not just exercise, but delivering a feeling of guilt. It’s a way of looking at a thing beyond it’s features and immediate use case.
\n\nLook at the current success of Widgetsmith. Apple touted widgets as a way to get glanceable information on your home screen, but that’s not why they took off - they took off because, to many people, their phone’s home screen is their virtual home, and letting them decorate it their way means something to them.
\n\nOn a much smaller scale, when I released my album-focussed music player Longplay, I received a good amount of feedback and praise. Interestingly, a theme emerged after a while from people expressing that they love the app not because of any specific features, but because it lets them reconnect with their album library in a way that reminded them of their old vinyl or CD collections. It’s a wall of their favourite albums that has been with them for many years or decades. It’s something personal.
\n\nWhile I developed and used the app myself, I had a vague sense of that, but soaking in that feedback from users and getting those different perspectives revealed the “heart” or the “essence” of the app. That in turn helps digest and prioritise other feedback, suggestions and wishes. When you have an understanding of what the essence of your app is, it becomes much clearer what to say “no” and what to say “hell yeah” to.
\n\nWhat I’m taking away from this is that it’s important to look beyond the features, and try to get a feeling of what’s underneath and what’s the defining principle. It’s hard to find that yourself and you might need quite a bit of user feedback to get to that. But if you find it, it can be inherently rewarding, and might reveal aspects of directions to take your app that you did not consider before.
\n", "date_published": "2020-10-03T17:12:00+10:00", "date_modified": "2020-10-03T17:12:00+10:00" }, { "id": "https://adrian.schoenig.me/blog/2020/08/18/introducing-longplay/", "url": "https://adrian.schoenig.me/blog/2020/08/18/introducing-longplay/", "title": "Introducing Longplay", "content_html": "Longplay, my first self-published iOS app, is now available on the App Store. I’m super excited.
\n\n\n\nLongplay is a music player for anyone who enjoy listening to entire albums start-to-finish. It digs through your Apple Music or iTunes library 1 – that might have grown over the years or decades and is full of a mix of individual songs, partial albums, complete albums and playlists – to identify just those complete albums and gives you quick access to play them.
\n\nIt provides a beautiful view of all your album artwork, and let’s you explore your albums (or playlists) by various sort options. A unique one is Negligence which combines how highly you’ve ranked an album and when you last listened it, to let you rediscover forgotten favourites. Brightness sorts the albums by their primary colour for an interesting visual take on your albums collection.
\n\nYou can hide albums or playlists that you don’t want to show up - useful for meditation or kids albums, or smart playlists that you use for doing house keeping.
\n\nFor users who want to listen on specific AirPlay devices, such as multi-room audio systems or headphones, there’s a “Play on” feature that’s the quickest way to listen on the right device.
\n\nIt’s fully VoiceOver accessible, too. It works well on iPhone and iPad – including splitscreen support and a delightful cursor effect.
\n\nIf that sounds intriguing to you, head over to the App Store. It’s USD 2.99, EUR 3.49, AUD 4.49, or equivalent in your currency.
\n\nI started building the app under the name Albums back in mid 2015 for iOS 8.3 to experiment with the nascent Swift programming language and to scratch my own itch of being frustrated with finding my complete albums in the iOS Music app. I had a smart playlist for it in iTunes on my Mac, but that didn’t translate to iOS. A short while later I came up with the name Longplay to give it some more character and made a first attempt to get the app onto the App Store. However, due to the screenshots all using artworks of my music album, that was a no-go and I didn’t pursue that avenue further, though I did keep using the app myself since then.
\n\nIn 2016 I started using Spotify and added the capability to Longplay to work both with an iTunes music library. While I got that working, I also encounteered a bunch of headaches with the closed-source Spotify SDK. Those meant I kept using the app just byself and had a couple of friends using it, but a public release would have meant addressing all those Spotify edge cases - not great for a side project.
\n\nEnd of 2017 I did a minor update to Longplay to accommodate the latest Swift language features such as Codable.
\n\n2018 and 2019 went by with me using the app using a lot but no code changes. During that time I started using a Sonos system at home and listening to my music on that. When AirPlay 2 came out and Sonos supported it but Spotify didn’t, I switched away again from Spotify to Apple Music.
\n\nEarly 2020 I decided that now that I’m not using Spotify myself anymore, I could release Longplay without the (headaches of the) Spotify integration. So I spent quite some time modernising the code base, squashing bugs, polishing and the app, and prepared for the public release.
\n\nGetting the app itself ready for release was quite some work2. And then there was the challenge of getting it approved for an App Store release which went through several rejections and took one-and-a-half months. Alas, here we are! I’m really thankful for all the support, testing and feedback from my family and friends during development, the friendly folks over at the CoreInt Slack for emotional support and helping me collect the necessary fake and openly licensed artwork, and all my beta testers for their feedback.
\n\nDeveloping Longplay is a blast – working on a music player is something special for me. I love listening to music. It always lifts my mood – in particular when I’m not aware that I needed to have my mood lifted. And whenever I work on Longplay I just have to listen to all my favourite music.
\n\nI’m curious how the app will be received, and I very much I look forward to keep supporting the app now that it’s out in the wild!
\n\nA 1.0.1 update is imminent which aligns the play/pause button logic to mimick the Music app, and also adds to the contextual preview some information on the sorting: “Addiction” will show you how many times you’ve listened to an album, “Recency” will show when you added it to your library, and “Stars” will show your average rating for the songs on that album.
\n\nFor the upcoming iOS 14 release, I’m exploring a home screen widget to show the top albums/playlists by a sort order of your choice.
\n\nUpdate: All these and more have been added and released in the mean-time, see here.
\n\nThere’s some more info on the app on its dedicated page and in the Press Kit. Any questions, ping me on micro.blog, by tweet or by mail. And, of course, I’d be thrilled if you get the app and I hope you’ll enjoy it.
No, Spotify is not supported. Read on for more. ↩
\nA few notable things from going through the git log: I switched from CocoaPods to Swift Package Manager and use diffable data sources, removed Spotify, made the settings discoverable and added VoiceOver support for accessing them, added contextual actions including quick access to AirPlay controls, added haptics, fixed a whole tons of smaller bugs and performance issues, indicate status of calculating the per-album brightness information, and added the little thank you screen. ↩
\nWhen you’re creating a Vapor app, you might at some stage want to load some resources that you process in your code, e.g., some static data in JSON format that you want to augment.
\n\nI couldn’t find documentation for it and getting it to work required me to do a detour past the helpful folks in the Vapor discord group, so here’s a quick how to. Note, that this is for Vapor 4.
\n\nYou should put the files into a folder at the root of your Vapor app, e.g., into Resources
. Don’t put them into the your Sources
folder.
In your code, when you configure your Application
instance, you can get the path and trigger loading of the data. You could for example add the following to your Sources/App/configure.app
:
public func configure(_ app: Application) throws {\n // Existing set-up code\n \n let path = app.directory.workingDirectory + \"Resources/\"\n StaticDataManager.shared.load(directory: path)\n}\n
This should then work when running Vapor from the command line.
\n\nYou’ll need to do more setup when running using Docker or Xcode:
\n\nDockerfile
add a line like COPY --from=build /build/Resources /run/Resources
, this makes sure the resources will be copied across when packaging up your app.