Tag: technews

  • Cook ‘incredibly excited’ about generative AI coming to Apple gear later in 2024

    Tim Cook says that Apple is spending “a tremendous amount of time and effort” on AI features that will be announced in the coming months.

    A Siri icon superimposed on Apple Park

    Apple’s Cook took the opportunity of the firm’s latest financial earnings call to enthuse about the Apple Vision Pro and the future of AI. “We are announcing these results on the eve of what is sure to be an historic day as we enter the era of spatial computing,” he said. “Moments like these are what we live for at Apple, they’re why we do what we do.”

    He said that this is “why we’re so unflinchingly dedicated to groundbreaking innovation,” and also “why we’re so focused on pushing technology to its limits as we work to enrich the lives of our users.”

    Continue Reading on AppleInsider | Discuss on our Forums


  • Apple R&D spending flat for the first time in over a decade — sort of

    Research and development spending has increased consistently for over a decade at Apple, but it’s basically flat year over year for Q1 2024.

    Apple Park

    Apple spends incredible amounts of money on research and development each quarter. According to Apple’s Q1 2024 earnings report, it spent $7.7 billion in the space.

    When viewed each quarter, Apple has increased its spending on R&D year-over-year every quarter since at least 2013. For Q1 2024, it was nearly flat for the first time.


  • If you’re in the EU, you can ask Apple about App Store changes

    As Apple prepares to alter its App Store regulations to comply with the European Union’s new Digital Markets Act, the company is now offering consultations to help developers better understand their options.

    Apple App Store icon

    Apple announced in January that it would allow EU developers to sell their apps through other app stores. Although some saw this as a victory, it’s not immediately obvious whether developers should continue using the official App Store or branch out on their own.

    To assist developers in making decisions, Apple recommends requesting a 30-minute online consultation session with their team. This will provide an opportunity to gain clarity, ask questions, and give feedback on proposed changes. Requests can be made through Apple’s developer page.


  • Apple won’t license Masimo’s patents despite Apple Watch import ban

    Apple CEO Tim Cook says there isn’t any intention to license Masimo’s blood oxygen detection patents to end the Apple Watch import ban.

    Apple Watch Series 9

    Masimo has been embroiled in a patent lawsuit that most recently resulted in an import ban for Apple Watches with a blood oxygen sensor. Apple opted to disable the sensor to continue sales rather than take any other course of action.

    Apple CEO Tim Cook told CNBC in a statement shared on live television that Apple has no intention to license Masimo’s patents. While it seemed likely that was the case, the company hadn’t said as much publicly until now.


  • iPhone sales are up, but take a big hit in China

    Apple’s iPhone sales worldwide are up, but sales in China are down 13% year over year, and CEO Tim Cook argues that this reflects overall economic issues rather than the iPhone losing popularity.

    Apple’s iPhone 15 Pro range

    During its Q1 2024 earnings announcement, Apple reported that its total iPhone sales in China were $20.82 billion, a drop of 13% compared to the same period in 2023.

    “The dollar is very strong versus the RMB,” Tim Cook told CNBC. “And so that negative 13 goes to a mid-single digit number.”


  • What Apple’s jailbroken iPhone kits for security researchers look like

    Pictures have surfaced of one of Apple’s Security Research Device kits, which at present consists of a specially-built variant of the iPhone 14 Pro.

    The iPhone 14 Pro

    Apple launched the iPhone Security Research Device Program in 2019. The program has reportedly discovered 130 high-profile security-critical vulnerabilities as of 2023.

    Those who apply to the Security Research Device Program (SRDP) receive a Security Research Device — which is essentially a jailbroken iPhone.


  • Apple’s $119B Q1 2024 revenue a bounce back from 2023 dip

    Apple reported revenue of $119.58 billion in its first-quarter results for the 2024 fiscal year, with earnings rebounding from the year-ago quarter.

    Apple CEO Tim Cook

    Covering the first quarter of the 2024 fiscal calendar, one typically boosted by holiday sales, Apple issued its financial results on Thursday, ahead of the usual conference call that CEO Tim Cook and CFO Luca Maestri hold with industry analysts.

    For the first quarter, Apple achieved $119.58 billion in revenue, up from the $117.15 billion reported for Q1 2023. Earnings per share of $2.18 were up from $1.88 in the year-ago quarter.


  • Meta’s Reality Labs had its best quarter, but still lost $4 billion

    Karissa Bell

    Reality Labs, Meta’s division for AR, VR and the metaverse, just had its best quarter yet despite continuing its multibillion-dollar losing streak. Reality Labs generated more than $1 billion in revenue during the final quarter of 2023 thanks to its Quest headsets and the Ray-Ban Meta smart glasses.

    While crossing $1 billion in revenue is a new milestone for the company’s metaverse group, it’s still expected to continue racking up massive losses for the foreseeable future. Reality Labs lost $4.6 billion in the quarter, and more than $16 billion in 2023. Meta CFO Susan Li said that these losses are expected to “increase meaningfully year-over-year due to our ongoing product development efforts in augmented reality/virtual reality and our investments to further scale our ecosystem.”

    The fourth quarter, which encompasses the holiday shopping season, has typically been when Reality Labs does best. During a call with analysts, Mark Zuckerberg suggested that the company’s smart glasses had done particularly well, saying that Ray-Ban maker EssilorLuxottica was “planning on making more [smart glasses] than we’d both expected due to high demand.” He added that both Quest 2 and Quest 3 were “performing well,” calling Quest 3 the “most popular mixed reality device.”

    Reality Labs aside, Meta had a strong quarter, reporting $40.1 billion in revenue to close out 2023 and bringing its total revenue for the year to just under $135 billion. Facebook’s user base also grew to 2.1 billion daily active users (DAUs). Li said that the company was “transitioning away” from sharing the metric and would no longer report on Facebook’s daily or monthly active users or its “family monthly active people.”

    The company had said back in 2019, as Facebook’s growth began to slow, that it would eventually stop reporting user numbers. But the change shows how Facebook’s position in the company’s “family of apps” has changed in recent years. A report from Pew Research earlier this week found that Instagram is continuing to grow in the US while Facebook use remains flat.

    Meta’s newest app, Threads, is still growing, however. Zuckerberg said the service has 130 million monthly users, up from “just under” 100 million last fall. “Threads now has more people actively using it today than it did during its initial launch peak,” Zuckerberg said, referring to the app’s initial, but short-lived, surge in growth.

    Zuckerberg also talked more about his newly stated ambition to create artificial general intelligence, or AGI, at Meta, saying it would be the “theme” of the company’s product work going forward. “This next generation of services requires building full general intelligence,” he said. “It’s clear that we’re going to need our models to be able to reason, plan, code, remember and many other cognitive abilities in order to provide the best versions of the services that we envision.”

    The Meta CEO also indicated the company would be unlikely to offer any of its apps in alternative app stores in Europe, following Apple’s controversial new developer policies. “The way that they’ve implemented it, I would be very surprised if any developer chose to go into the alternative app stores,” he said. “They’ve made it so onerous, and I think so at odds with the intent of what the EU regulation was, that I think it’s just going to be very difficult for anyone, including ourselves, to really seriously entertain.”

    This article originally appeared on Engadget at https://www.engadget.com/metas-reality-labs-had-its-best-quarter-but-still-lost-4-billion-231135719.html?src=rss


  • Apple sold enough iPhones and services last quarter to reverse a downward revenue trend

    Cherlynn Low

    After four consecutive quarters of revenue decline, Apple broke the trend and reported revenue growth today. In its earnings report for the first quarter of fiscal year 2024, the company announced quarterly revenue of $119.6 billion, an increase of 2 percent from the same period last year.

    In addition, Apple CEO Tim Cook said its “installed base of active devices has now surpassed 2.2 billion, reaching an all-time high across all products and geographic segments.” This quarter includes money brought in from the sales of the iPhone 15 line introduced in September 2023, which had an obvious impact on performance. 

    “Today Apple is reporting revenue growth for the December quarter fueled by iPhone sales, and an all-time revenue record in Services,” Cook said. He noted the company hitting “all-time revenue records across advertising, Cloud services, payment services and video as well as December quarter records in App Store and Apple Care.” Cook recapped some updates made to the Apple TV app, as well as TV+ content earning nominations and awards. 

    Cook went on to remind us during the company’s earnings call that tomorrow is the launch day for the Vision Pro headset, calling it historic. After saying that Apple is dedicated to investing in new technologies, Cook added that the company will be sharing more about its developments in AI later this year. 

    Products in the wearables, home and accessories categories didn’t fare well this quarter, though sales in the Mac department did increase year over year. iPad sales in particular dropped 25 percent from the same period last year, though Cook attributed that to a “difficult compare” with the big numbers recorded in the first quarter of 2023, which were boosted by new models with refreshed Apple Silicon. Considering the company did not release a new iPad model in 2023 at all, this is not surprising.

    Cook continued by highlighting developments like Apple opening its 100th retail location in Asia Pacific and updates on its sustainability efforts. He wrapped up by saying “Apple is a company that has never shied away from big challenges,” adding “so we’re optimistic about the future, confident in the long term and as excited as we’ve ever been to deliver for our users.”

    This article originally appeared on Engadget at https://www.engadget.com/apple-sold-enough-iphones-and-services-last-quarter-to-reverse-a-downward-revenue-trend-223109289.html?src=rss


  • AI Predicts Your Insides From Your Outsides With Pseudo-DXA

    Lambert T Leong, PhD

    A Quantitatively Accurate and Clinically Useful Generative Medical Imaging Model

    3D body surface scan point cloud and matching dual energy X-ray absorptiometry (DXA) scan (Image by Author)

    Key Points

    1. To our knowledge, this is the first quantitatively accurate model in which generated medical imaging can be analyzed with commercial clinical software.
    2. Being able to predict interior distributions of fat, muscle, and bone from exterior shape indicates the strong relationship between body composition and body shape.
    3. This model represents a significant step towards accessible health monitoring, producing images that would normally require specialized, expensive equipment and trained technicians, and involve exposure to potentially harmful ionizing radiation.
    4. Read the paper HERE

    Generative artificial intelligence (AI) has become astonishingly popular, especially after the release of both diffusion models like DALL-E and large language models (LLMs) like ChatGPT. In general, AI models are classified as “generative” when the model produces something as an output. For DALL-E the output is a high-quality image, while for ChatGPT the output is highly structured, meaningful text. These generative models differ from classification models, which output a prediction for one side of a decision boundary such as cancer or no cancer, and from regression models, which output numerical predictions such as blood glucose level. Medical imaging and healthcare have benefited from AI in general, with several compelling use cases, and generative models are constantly being developed. A major barrier to clinical use of generative AI models is a lack of validation of model outputs beyond just image quality assessments. In our work, we evaluate our generative model on both a qualitative and a quantitative assessment as a step towards more clinically relevant AI models.

    Quality vs Quantity

    In medical imaging, image quality is crucial; it’s all about how well the image represents the internal structures of the body. The majority of use cases for medical imaging are predicated on having images of high quality. For instance, X-ray scans use ionizing radiation to produce images of many internal structures of the body, and quality is important for distinguishing bone from soft tissue or organs, as well as for identifying anomalies like tumors. High-quality X-ray images have easier-to-identify structures, which can translate to more accurate diagnoses. Computer vision research has led to the development of metrics meant to objectively measure image quality. These metrics, which we use in our work, include peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), for example. Ultimately, a high-quality image can be defined as having sharp, well-defined borders, with good contrast between different anatomical structures.
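    As a rough illustration of how these metrics work, here is a minimal numpy sketch of PSNR and a simplified single-window SSIM. The SSIM here is computed globally over the whole image for brevity; production implementations (e.g. in scikit-image) use a sliding window, and the toy images below are random stand-ins, not real scans.

```python
import numpy as np

def psnr(reference, generated, data_range=1.0):
    """Peak signal-to-noise ratio in decibels (higher is better)."""
    mse = np.mean((reference - generated) ** 2)
    if mse == 0:
        return np.inf
    return 10 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """Simplified SSIM computed over the whole image in one window.
    Full implementations slide a Gaussian window across the image."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Toy grayscale "scan" and a slightly noisy copy of it
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
gen = np.clip(ref + rng.normal(0, 0.01, ref.shape), 0, 1)
print(psnr(ref, gen), global_ssim(ref, gen))
```

    An identical image pair gives infinite PSNR and SSIM of exactly 1; small amounts of noise pull both metrics down.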

    Images are highly structured data types, made up of a matrix of pixels of varying intensities. Unlike natural images, as seen in the ImageNet dataset of cars, planes, boats, etc., which have three color channels (red, green, and blue), medical images are mostly grayscale, or single channel. Simply put, sharp edges are achieved by having pixels near the borders of structures be uniform, and good contrast is achieved when neighboring pixels depicting different structures have a noticeable difference in value from one another. It is important to note that the absolute values of the pixels are not the most important thing for high-quality images; quality depends more on the relative pixel intensities. This, however, is not the case for achieving images with high quantitative accuracy.

    Demonstrating the difference between quality and quantity. Both images look the same and are of good quality but the one on the right gives the right biological measurements of bone, muscle, and fat. (Image by Author)

    A subset of medical imaging modalities is quantitative, meaning the pixel values represent a known quantity of some material or tissue. Dual-energy X-ray absorptiometry (DXA) is a well-known and common quantitative imaging modality used for measuring body composition. DXA images are acquired using high- and low-energy X-rays. A set of equations, sometimes referred to as DXA math, is then used to compute the contrast and ratios between the high- and low-energy X-ray images to yield quantities of fat, muscle, and bone. Hence the word quantitative. The absolute value of each pixel is important because it ultimately corresponds to a known quantity of some material. Any small change in the pixel values, while the image may still look of the same or similar quality, will result in noticeably different tissue quantities.
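    The core of this kind of dual-energy math can be sketched as a two-material decomposition: measured attenuation at two energies gives two equations in two unknown material densities. The attenuation coefficients below are made-up illustrative numbers, not real calibration values, and actual DXA math involves additional corrections; this only shows the shape of the calculation.

```python
import numpy as np

# Hypothetical mass-attenuation coefficients (cm^2/g) for two basis
# materials at a low and a high X-ray energy; illustrative numbers only.
MU = np.array([[0.30, 0.50],   # low energy:  [soft tissue, bone]
               [0.20, 0.25]])  # high energy: [soft tissue, bone]

def decompose(att_low, att_high):
    """Solve the 2x2 system: log-attenuations measured at two energies
    -> areal densities (g/cm^2) of the two basis materials."""
    return np.linalg.solve(MU, np.array([att_low, att_high]))

# Forward-simulate a pixel with 10 g/cm^2 soft tissue and 1 g/cm^2 bone,
# then recover those densities from the simulated attenuations.
true = np.array([10.0, 1.0])
att = MU @ true
print(decompose(att[0], att[1]))
```

    Because the forward model is linear, the recovered densities match the simulated ones exactly; with real detectors, noise in either energy channel propagates directly into the tissue quantities, which is why small pixel errors matter.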

    Example of commercial software that is used clinically to measure body composition. In this example, we are demonstrating the ability to load and analyze our Pseudo-DXA generated image. (Image by Author)

    Generative AI in Medical Imaging

    As previously mentioned, generative AI models for medical imaging are at the forefront of development. Known examples of generative medical models include models for artifact removal from CT images or the production of higher-quality CT images from low-dose modalities where image quality is known to be lower. However, prior to our study, generative models creating quantitatively accurate medical images were largely unexplored. Quantitative accuracy is arguably more difficult for generative models to achieve than high image quality. Anatomical structures not only have to be in the right place, but the pixels representing their location need to be near perfect as well. When considering the difficulty of achieving quantitative accuracy, one must also consider the bit depth of raw medical images. The raw formats of some medical imaging modalities, DXA included, encode information in 12 or 14 bits, which is orders of magnitude more than standard 8-bit images. Higher bit depths equate to a bigger search space, which can make it more difficult to get the exact pixel value. We are able to achieve quantitative accuracy through self-supervised learning methods with a custom physics- or DXA-informed loss function described in this work here. Stay tuned for a deep dive into that work in the near future.
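    The bit-depth point is easy to make concrete: a 14-bit pixel can take 64 times as many distinct values as an 8-bit pixel. The raw detector value below is a hypothetical example, not a real DXA reading.

```python
# Number of distinct intensity values a pixel can take at each bit depth
levels = {bits: 2 ** bits for bits in (8, 12, 14)}
print(levels)  # 8-bit gives 256 levels; 14-bit gives 16384, 64x as many

# Normalizing a hypothetical raw 14-bit detector value into [0, 1]
raw_value = 9000
normalized = raw_value / (levels[14] - 1)
print(normalized)
```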

    What We Did

    We developed a model that can predict your insides from your outsides. In other words, our model innovatively predicts internal body composition from external body scans, specifically transforming three-dimensional (3D) body surface scans into fully analyzable DXA scans. Utilizing increasingly common 3D body scanning technologies, which employ optical cameras or lasers, our model bypasses the need for ionizing radiation. 3D scanning enables accurate capture of one’s exterior body shape and the technology has several health relevant use cases. Our model outputs a fully analyzable DXA scan which means that existing commercial software can be used to derive body composition or measures of adipose tissue (fat), lean tissue (muscle), and bone. To ensure accurate body composition measurements, our model was designed to achieve both qualitative and quantitative precision, a capability we have successfully demonstrated.

    Inspiration and Motivation

    The genesis of this project was motivated by the hypothesis that your body shape, or exterior phenotype, is determined by the underlying distribution of fat, muscle, and bone. We had previously conducted several studies demonstrating the associations of body shape with measured quantities of muscle, fat, and bone, as well as with health outcomes such as metabolic syndrome. Using principal component analysis (PCA), through shape and appearance modeling, and linear regression, a student in our lab showed the ability to predict body composition images from 3D body scans. While this was impressive and further strengthened the notion of a relationship between shape and composition, these predicted images excluded the forelimbs (elbow to hand and knee to foot), and the images were not in a format (raw DXA format) that enabled analysis with clinical software. Our work fully extends and overcomes these previous limitations. The Pseudo-DXA model, as we call it, is able to generate the full whole-body DXA image from 3D body scan inputs, which can be analyzed using clinical and commercial software.
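    The earlier PCA-plus-regression pipeline described above can be sketched in a few lines of numpy. Everything here is a stand-in: random arrays play the role of flattened body-scan vertices and measured fat mass, and the sizes are arbitrary; the point is only the structure (center, take principal components, regress a composition value on the PC scores).

```python
import numpy as np

# Toy "shape model": each row is one subject's body scan flattened to
# (n_vertices * 3) coordinates; random data stands in for real scans.
rng = np.random.default_rng(0)
scans = rng.normal(size=(50, 300))        # 50 subjects, 100 vertices each

# PCA via SVD of the mean-centered data
mean_shape = scans.mean(axis=0)
U, S, Vt = np.linalg.svd(scans - mean_shape, full_matrices=False)
shape_coeffs = (scans - mean_shape) @ Vt[:10].T   # first 10 PC scores

# Linear regression from shape coefficients to a composition value
fat_mass = rng.normal(70, 10, size=50)            # hypothetical targets
X = np.column_stack([np.ones(50), shape_coeffs])  # intercept + PC scores
beta, *_ = np.linalg.lstsq(X, fat_mass, rcond=None)
pred = X @ beta
print(pred.shape)
```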

    Very early proof-of-concept 3D to DXA image translation which sparked this whole project. (Image by Author)

    Our Training Data

    The cornerstone of the Pseudo-DXA model’s development was a unique dataset comprising paired 3D body and DXA scans, obtained simultaneously. Such paired datasets are uncommon, due to the logistical and financial challenges in scanning large patient groups with both modalities. We worked with a modest but significant sample size: several hundred paired scans. To overcome the data scarcity issue, we utilized an additional, extensive DXA dataset with over 20,000 scans for model pretraining.

    Building the Model

    The Pseudo-DXA model was built in two steps. The first, self-supervised learning (SSL) or pretraining, step involved training a variational autoencoder (VAE) to encode and decode, or regenerate, raw DXA scans. A large DXA dataset, independent of the dataset used in the final model and its evaluation, was used to SSL-pretrain our model, and it was divided to contain a separate holdout test set. Once the VAE model was able to accurately regenerate the original raw DXA images, as validated with the holdout test set, we moved to the second phase of training.

    In brief, VAE models consist of two main subnetwork components: the encoder and the decoder, also known as a generator. The encoder is tasked with taking the high-dimensional raw DXA image data and learning a meaningful compressed representation, which is encoded into what is known as a latent space. The decoder, or generator, takes the latent space representation and learns to regenerate the original image from the compressed representation. We used the trained generator from our SSL DXA training as the base of our final Pseudo-DXA model.
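    The encoder/latent/decoder structure can be sketched with a tiny numpy forward pass. This is not the authors' architecture: the real model is a deep network trained on raw DXA images, while this sketch uses small random linear layers purely to show the three VAE pieces (encode to a mean and log-variance, sample with the reparameterization trick, decode) and the standard reconstruction-plus-KL loss.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, Z = 256, 64, 16   # flattened image, hidden, and latent dims (toy sizes)

# Random, untrained parameters; a real VAE learns these by gradient descent
W_enc = rng.normal(0, 0.01, (D, H))
W_mu = rng.normal(0, 0.01, (H, Z))
W_logvar = rng.normal(0, 0.01, (H, Z))
W_dec1 = rng.normal(0, 0.01, (Z, H))
W_dec2 = rng.normal(0, 0.01, (H, D))

def encode(x):
    h = np.tanh(x @ W_enc)
    return h @ W_mu, h @ W_logvar        # mean and log-variance of q(z|x)

def reparameterize(mu, logvar):
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def decode(z):                            # the "generator" half
    return np.tanh(z @ W_dec1) @ W_dec2

def vae_loss(x):
    mu, logvar = encode(x)
    x_hat = decode(reparameterize(mu, logvar))
    recon = np.mean((x - x_hat) ** 2)     # reconstruction term
    kl = -0.5 * np.mean(1 + logvar - mu ** 2 - np.exp(logvar))  # KL to N(0, I)
    return recon + kl

x = rng.random((4, D))    # a batch of 4 flattened toy "scans"
print(vae_loss(x))
```

    After pretraining, only the decoder/generator is kept as the base of the second training phase.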

    Model architecture diagram with the first self-supervised learning phase at the top and the Pseudo-DXA training phase at the bottom. (Image by Author)

    The 3D body scan data consisted of a series of vertices, or points, and faces, which indicate which points are connected to one another. We used a model architecture resembling PointNet++, which has demonstrated the ability to handle point cloud data well. The PointNet++ model was then attached to the generator we had previously trained. We then fed the model the 3D data, and it was tasked with learning to generate the corresponding DXA scan.
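    PointNet++ itself is a deep network, but its characteristic downsampling step, farthest point sampling, is simple enough to sketch. The random point cloud below stands in for a body-surface scan; this is an illustration of the sampling idea, not the paper's implementation.

```python
import numpy as np

def farthest_point_sample(points, k, seed=0):
    """Greedy farthest-point sampling, the downsampling step used inside
    PointNet++-style set abstraction layers. points: (N, 3) array."""
    rng = np.random.default_rng(seed)
    n = len(points)
    chosen = [int(rng.integers(n))]       # start from a random point
    dist = np.full(n, np.inf)             # distance to nearest chosen point
    for _ in range(k - 1):
        dist = np.minimum(
            dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(dist.argmax()))  # pick the farthest remaining point
    return np.array(chosen)

# Toy body-scan point cloud: 2,000 random surface points
pts = np.random.default_rng(1).random((2000, 3))
idx = farthest_point_sample(pts, 128)
print(idx.shape)
```

    The result is 128 indices spread roughly evenly over the cloud, which is what lets the network summarize local neighborhoods at progressively coarser scales.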

    Pseudo-DXA Results

    In alignment with machine learning best practices, we divided our data such that we had an unseen holdout test set on which we report all our results.

    Image quality

    We first evaluated our Pseudo-DXA images using image quality metrics, including normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). Our model-generated images had mean NMAE, PSNR, and SSIM of 0.15, 38.15, and 0.97, respectively, which is considered good with respect to quality. Shown below are an example 3D scan, the actual low-energy DXA scan, the Pseudo-DXA low-energy scan, and the percent error map between the two DXA scans. As mentioned, DXA images have two image channels for high and low energies, yet these examples show just the low-energy image. Long story short, the Pseudo-DXA model can generate high-quality images on par with other medical imaging models with respect to the image quality metrics used.

    3D scan from the test set, their actual DXA scan, the Pseudo-DXA scan, and error map comparing the actual to the Pseudo-DXA. (Image by Author)

    Quantitative Accuracy

    When we analyzed our Pseudo-DXA images for composition and compared the quantities to the actual quantities, we achieved coefficients of determination (R²) of 0.72, 0.90, 0.74, and 0.99 for fat, lean, bone, and total mass, respectively. An R² of 1 is desired, and our values were reasonably close considering the difficulty of the task. A comment we encountered when presenting our preliminary findings at conferences was, “Wouldn’t it be easier to simply train a model to predict each measured composition value from the 3D scan, so the model would, for example, output a quantity of fat, bone, etc., rather than a whole image?” The short answer is yes; however, that model would not be as powerful and useful as the Pseudo-DXA model we are presenting here. Predicting a whole image demonstrates the strong relationship between shape and composition. Additionally, having a whole image allows for secondary analysis without having to retrain a model. We demonstrate the power of this by performing ad hoc body composition analysis on two user-defined leg subregions. If we had trained a model to output only scalar composition values and not an image, we would only be able to analyze these ad hoc user-defined regions by training a whole new model for each measure.
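    For reference, the coefficient of determination reported above follows the standard definition, R² = 1 - SS_res / SS_tot. The fat-mass values below are hypothetical numbers chosen to illustrate the calculation, not data from the study.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Hypothetical fat-mass values (kg): reference DXA vs. model-derived
y_true = np.array([18.2, 25.1, 30.4, 22.8, 27.5, 19.9])
y_pred = np.array([19.0, 24.0, 29.5, 24.1, 26.8, 21.0])
print(r_squared(y_true, y_pred))
```

    Perfect predictions give exactly 1; predicting the mean for every subject gives 0, which is why values like 0.90 for lean mass indicate a strong fit.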

    Example of secondary analysis with user defined subregions of the leg labeled R1 and R2. (Image by Author)

    Long story short, the Pseudo-DXA model produced high quality images that were quantitatively accurate, from which software could measure real amounts of fat, muscle, and bone.

    So What Does This All Mean?

    The Pseudo-DXA model marks a pivotal step towards a new standard of striving for quantitative accuracy when necessary. The bar for good generative medical imaging models has been high image quality, yet, as we discussed, good quality may simply not be enough for a given task. If the clinical task or outcome requires something to be measured from the image beyond morphology or anthropometry, then quantitative accuracy should be assessed.

    Our Pseudo-DXA model is also a step in the direction of making health assessment more accessible. 3D scanning is now in phones and does not expose individuals to harmful ionizing radiation. In theory, one could get a 3D scan of themselves, run it through our model, and receive a DXA image from which they can obtain quantities of body composition. We acknowledge that our model generates statistically likely images and is not able to predict pathologies such as tumors, fractures, or implants, which are statistically unlikely in the context of the healthy population from which this model was built. Our model also demonstrated great test-retest precision, which means it has the ability to monitor change over time. So, individuals can scan themselves every day without the risk of radiation, and the model is robust enough to show changes in composition, if any.
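    Test-retest precision in densitometry is conventionally summarized as a root-mean-square coefficient of variation (%CV) across subjects scanned more than once; the sketch below assumes that convention (the paper's exact precision metric may differ), and the repeated fat-mass values are hypothetical.

```python
import numpy as np

def rms_cv_percent(repeat_scans):
    """Test-retest precision as the root-mean-square coefficient of
    variation (%) across subjects, each scanned multiple times.
    repeat_scans: (n_subjects, n_repeats) array of a measured quantity."""
    sd = repeat_scans.std(axis=1, ddof=1)        # per-subject SD over repeats
    cv = sd / repeat_scans.mean(axis=1) * 100    # per-subject %CV
    return float(np.sqrt(np.mean(cv ** 2)))      # RMS across subjects

# Hypothetical fat mass (kg) from two repeated runs per subject
scans = np.array([[18.2, 18.4],
                  [25.1, 24.9],
                  [30.4, 30.1]])
print(rms_cv_percent(scans))
```

    A low RMS %CV means repeat measurements of the same person agree closely, which is what allows real change over time to be distinguished from measurement noise.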

    We invite you to engage with this groundbreaking technology, which provides an example of a quantitatively accurate generative medical imaging model. Share your thoughts, ask questions, or discuss potential applications in the comments. Your insights are valuable to us as we continue to innovate in the field of medical imaging and AI. Join the conversation and be part of this exciting journey!

    More Resources

    Read The Paper

    Generative deep learning furthers the understanding of local distributions of fat and muscle on body shape and health using 3D surface scans – Communications Medicine

    Model and Data Request


    AI Predicts Your Insides From Your Outsides With Pseudo-DXA was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.
