So if you’re wearing a Vision Pro and you see someone else wearing a Vision Pro, you just see the headset, right? Somebody at Apple is furiously working on recognizing other headsets and replacing them with Personas.
Our first Mac was a purple gumdrop G3 iMac DV with Mac OS 9. We were in grad school at the time and my wife used it to write her thesis in Word. The iMac was a fantastic little machine, but Word would occasionally center her entire document, so there was cursing and crying. I don’t think we ever had anything fancy enough to plug into those FireWire ports. #MyFirstMac
It’s the kind of day where you have to disable System Integrity Protection. I’m sure this’ll all turn out fine.
Somebody nicked our credit card number and Bank of America sent new cards. Cool. Then I have to manually update Apple Wallet on like 6 different devices… not as cool.
TIL: AVCaptureDevice documentation says the ‘uniqueID’ is ‘a unique identifier that persists on one system across device connections and disconnections, application restarts, and reboots of the system itself’. Unless you plug that webcam into a different USB port. Then the Mac is all ‘Woah! WTF is this thing?!’
I have a project that fails to build under Xcode 15. It links a 3rd party framework written in C++, but the link step fails with unresolved symbols. This classic mode flag for the linker fixed it right up.
Python in Excel! But you have to write your python in the little Excel formula box? Like getting a Ferrari but having to drive it with an app on your phone.
It’d be cool if the Apple Card (like the physical card) worked with Find My, so my wallet doesn’t have to have a dog tag.
That thing where the list of states on a web form is alphabetized by state name, but the display text is the two letter abbreviation… so the list goes NE, NV, NH, NJ, NM, NY, NC, ND. Who looks at that and is like, yeah, we got it!
Am I reading this right?
Disney+ and Hulu are getting big price increases, $3 a month for both services on the ad-free tier. So if you subscribe to both, it currently costs $26 and that’s getting bumped to $32. BUT they’re adding a bundle of ad-free Hulu and Disney+ for $20. So if you subscribe to both (like me!), this is actually going to save $6?
Office 365
I’ve been moving SonicBunny Software from Google Workspace to Office 365 over the last few weeks. I like 365 and it offers a lot of features, but the time it takes for changes to kick in is a bit of a shock. Configuring Exchange and so many other things takes 5 or 10 minutes to apply. I guess Google is similar, but with a small installation Google seemed nearly instant, where O365 really does take several minutes. But I just hit the kicker – ‘Please allow 24 to 48 hours for this to take effect.’ Yowza!
First hard drive, 10s of megabytes. Latest hard drive, 10s of terabytes. My brain can’t really fathom how much more that is.
A side dish of “cucumber salad” is fine, but order a “bowl of pickles” and people look at you funny.
HVAC died today on (almost) the hottest day of the year, 8 days after its annual service. Luckily, the AC peeps were out in about an hour! Turned out to be a loose wire in an electrical box — easy fix. 🥵
iPhone rerouted us over the weekend to avoid “severe weather”. First time I’ve seen that. It helped but we still nearly floated away.
Can you guess the TV show from a random person in the opening credits?
Only scored 9 out of 10. It’s the black and white western that got me.
We’ve been using UniFi Talk for a while for our home phone. It’s kinda… basic. But they got SMS support a while ago! Yay! But the SMS to email relay has a delay of usually 10 minutes. Boo! Not great when the repair guy texts that he’s 10 minutes out and you get the text 10 minutes later. But, turns out you can also be notified via Slack webhook, which happens immediately. Now we get instant delivery of SMS messages to our home number and only have to go through like three different systems. Yay?
Callisto, Jupyter and Mac Optimized Machine Learning – Part 2
In my last post, I looked at how to install TensorFlow optimized for Apple Silicon. This time around, I’ll explore Apple Silicon support in PyTorch, another wildly popular library for machine learning.
Setting up Callisto for PyTorch is easy! The suggested pip command is:

pip install torch torchvision torchaudio

And we can do that directly in the Callisto package manager. Remember, you can install multiple packages at a time by adding a space separated list, so paste torch torchvision torchaudio into the install field and away we go!
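Once the install finishes, it’s worth a quick sanity check that the MPS backend made it into the build. Here’s a minimal cell you can run (my own snippet, not from Callisto’s docs):

import torch

# Was this PyTorch wheel built with MPS support?
print("MPS built:", torch.backends.mps.is_built())

# Is the Metal GPU actually available on this machine?
print("MPS available:", torch.backends.mps.is_available())

# Prefer the GPU when we have it, otherwise fall back to the CPU
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print("Using device:", device)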
I was looking for a little example to run and compare the performance of PyTorch on the Apple Silicon CPU with performance on the GPU. To be quite honest, it was difficult to find a straightforward example. Fortunately, I ran across this notebook by Daniel Bourke. Daniel works through an example training a model on both the CPU device and the MPS device. MPS is the Metal Performance Shaders backend, which uses Apple’s Metal framework to harness the power of the M1’s graphics hardware. In this example, he creates a Convolutional Neural Network (CNN) for image classification and compares the performance of the CPU and MPS backends.
The bottom line? MPS is at least 10x faster than using the CPU. In Daniel’s posted notebook, he saw a speedup of around 10.6x. On my machine, I saw a performance increase of about 11.1x. The best thing about this optimization in PyTorch is that it doesn’t require any extra work: the standard Mac build ships with the MPS backend included, so everyone gets the performance boost just by selecting the mps device.
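If you’d rather not run the whole notebook, the core of the comparison boils down to training the same model twice, once per device, and timing it. Here’s a rough sketch of that idea (my own simplified stand-in, not Daniel’s code; the tiny CNN and the random data shapes are made up for illustration):

import time
import torch
from torch import nn

def time_training(device: torch.device, steps: int = 50) -> float:
    # Tiny CNN stand-in for the image classifier in the notebook
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(16 * 64 * 64, 10),
    ).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # Random stand-in data: a batch of 32 "images", 3x64x64, 10 classes
    x = torch.randn(32, 3, 64, 64, device=device)
    y = torch.randint(0, 10, (32,), device=device)

    start = time.perf_counter()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    if device.type == "mps":
        torch.mps.synchronize()  # MPS runs asynchronously; wait before stopping the clock
    return time.perf_counter() - start

cpu_time = time_training(torch.device("cpu"))
mps_time = time_training(torch.device("mps"))
print(f"CPU: {cpu_time:.2f}s   MPS: {mps_time:.2f}s   speedup: {cpu_time / mps_time:.1f}x")

A toy model like this won’t necessarily reproduce the ~10x gap from the real notebook, but it shows the pattern: the only Apple Silicon-specific piece is passing the mps device.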
In addition to TensorFlow and PyTorch, I checked some other popular Python ML libraries to see how they take advantage of Apple Silicon. While some libraries have chosen not to pursue Apple Silicon-specific optimization, all of them run correctly in CPU mode.
- Keras
- Built on TensorFlow, Keras should show significant performance improvements when you use an optimized version of TensorFlow
- FastAI
- Built on PyTorch, fastai should show significant performance improvements when you use an optimized version of PyTorch
- Scikit-learn
- To avoid the management overhead and complexity, scikit-learn doesn’t support GPU acceleration
- Numpy
- It may be possible to improve performance in numpy by compiling it against an optimized BLAS library which uses Apple’s Accelerate framework. The Accelerate framework provides high performance, vector-optimized mathematical functions that are tuned for Apple Silicon. This is a bit involved and will require more research to see what impact it can have; a quick way to check which BLAS a numpy build is linked against is sketched after this list.
- XGBoost
- XGBoost seems to be focused on GPUs that support CUDA for hardware acceleration and currently has no plans to support Apple Silicon.
- Numba
- Numba also seems to focus only on CUDA-based GPU acceleration
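As a starting point for that numpy research, you can at least see which BLAS your current build is linked against (a minimal check, assuming a stock pip-installed numpy):

import numpy as np

# Print numpy's build configuration, including the BLAS/LAPACK
# libraries it was compiled against. A build that uses Apple's
# Accelerate framework should mention "accelerate" here.
np.show_config()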