ICASA will convene public hearings later this month on the Draft Sports Broadcasting Services Amendment Regulations that were published on 14 December 2018.
The regulator has received 39 written submissions from industry players on the draft regulations, and 28 of the submitters have confirmed their availability to present before the council.
“ICASA has a clear mandate of regulating in the public interest. Therefore, the draft regulations seek to reiterate and ensure that South Africans have access to a wide range of national sporting events and further reflect and give exposure to minority and developmental sport,” says ICASA councillor, Palesa Kadi.
The public hearings will be held at African Pride Irene Country Club in Centurion from 27 to 31 May, and ICASA has noted the participation in the hearings process with appreciation.
“It is really encouraging to see such interest in our regulatory processes because this helps us to make informed decisions that are indeed in the public interest,” said Kadi.
The United States has imposed strict limitations on its technology trade with China, with Huawei directly affected by the crackdown.
As a result of this “trade war”, Google announced it has cut off Huawei’s Android licence – a move which would have serious implications for the manufacturer’s smartphone business.
Google released a statement which attributes the revocation of the licence to compliance with US government policy.
“We are complying with the order and reviewing the implications,” the company said.
“For users of our services, Google Play and the security protections from Google Play Protect will continue to function on existing Huawei devices.”
The immediate consequence of this decision is Huawei’s loss of access to future Android updates, meaning existing Huawei smartphones will not receive official Android OS updates going forward.
According to a report by Reuters, Huawei will only be able to use the open-source version of Android and will lose access to proprietary apps and services from Google.
The services and applications which will be limited by the implementation of this suspension are still being discussed within Google, the report said.
Huawei has stated that it is examining the impact of the US trade blacklist on its products.
While the Chinese manufacturer will be able to use the Android Open Source Project (AOSP) licence to develop its software, this licence does not encompass applications such as Gmail, YouTube, and the Chrome browser.
These applications require a commercial agreement with Google and are available to download through the Google Play Store.
It remains unclear how Huawei will alter its platform following the suspension of its Android licence, but South Africa and other Western markets could be caught in a potentially compromising position.
Huawei may choose to migrate its devices to its own proprietary operating system, which it confirmed it has been developing in case it loses access to Android.
Moving to this new operating system would have a minimal effect in China, where most Google applications are banned and users have adopted Chinese equivalents.
However, the loss of access to YouTube, Gmail, Chrome, and other popular apps could have a devastating effect on users in the Western markets where Huawei operates.
Regardless of whether Huawei decides to migrate to its backup OS or stay with an open-source version of Android, the suspension of its Android licence will have a significant impact on Huawei users in South Africa.
The loss of access to popular apps and services is yet to be officially confirmed by Google.
Huawei will support devices
Huawei told MyBroadband that it has made substantial contributions to the development and growth of Android.
“As one of Android’s key global partners, we have worked closely with their open-source platform to develop an ecosystem that has benefitted both users and the industry,” said Huawei.
“Huawei will continue to provide security updates and after-sales services to all existing Huawei and Honor smartphone and tablet products, covering those that have been sold and that are still in stock globally.”
“We will continue to build a safe and sustainable software ecosystem in order to provide the best experience for all users globally.”
What is reportedly the deepest manned sea dive ever recorded showed us just how far down our trash goes.
Explorer Victor Vescovo journeyed 10,928 meters (35,853 feet) to the bottom of the Challenger Deep in the Pacific Ocean’s Mariana Trench, believed to be the deepest point on the planet, on April 28. It’s part of the Five Deeps Expedition, which is charting the ocean’s five deepest areas.
The scientific team identified at least three new species of marine animal during this dive series, including a type of long-appendaged amphipod, at the bottom of the Challenger Deep.
Unfortunately, Vescovo also spotted a plastic bag and candy wrappers during his four-hour dive in the Limiting Factor submersible, as previously reported by CNN.
“It is almost indescribable how excited all of us are about achieving what we just did,” he said in a release after the completion of the dives. “We feel like we have just created, validated, and opened a powerful door to discover and visit any place, any time, in the ocean — which is 90% unexplored.”
Vescovo also beat James Cameron’s Challenger Deep record — the Titanic director reached a depth of 10,908 meters (35,787 feet) in 2012.
The Five Deeps Expedition will be aired in a five-part Discovery Channel documentary series in late 2019. It already revealed a strange new species of sea squirt at the deepest point of the Indian Ocean.
Facebook has had a tough time since the revelations of the Cambridge Analytica data-harvesting case came out last year. At the F8 conference this year, chief executive Mark Zuckerberg promised to execute a “re-plumbing” job to make Facebook and its sister platforms – including WhatsApp and Instagram – more private and secure. But it looks like the company’s problems, as well as its users’, might not be ending anytime soon. In a shocking revelation, we have learned that a vulnerability in the WhatsApp messenger may have allowed hackers to install spyware on users’ smartphones to snoop on so-called end-to-end encrypted chats.
The Financial Times (paywall) reports that a vulnerability in WhatsApp’s voice-calling feature allowed attackers to remotely execute code that would install spyware on any iPhone or Android smartphone. This could be accomplished even if the targets did not pick up the call. A WhatsApp spokesperson said that the security team has patched the issue, but urged users to update their apps as soon as possible.
The publication alleges that although the creator of this exploit is unclear, it resembles other products by Israeli company NSO Group, which has previously been accused of providing spyware to wiretap the conversations of human rights activists and journalists. NSO Group is infamous as the creator of a powerful tool called Pegasus, which can be used by intelligence agencies worldwide to eavesdrop on suspects. It was also alleged to have helped the Saudi government track the conversations of dissidents and other opponents of the autocratic regime, with a list of targets that includes the slain Wall Street Journal reporter Jamal Khashoggi. The company claims that its products are sold to government agencies for fighting terrorism, and it has been facing multiple lawsuits on grounds of illegal hacking.
Earlier this month, while WhatsApp’s engineers were trying to fix the vulnerability, they came across unusual voice-calling activity, which is when they realised the gravity of the situation. This was reportedly an attack used to target a London-based human rights lawyer involved in lawsuits against NSO Group. The lawyer, whose name was not shared, was representing individuals including a number of activists, journalists, and dissidents whose smartphones had previously been compromised by NSO’s Pegasus.
Besides releasing a fix for the vulnerability on Monday, WhatsApp also alerted the U.S. Justice Department about the possibility that similar tools could be in use for targeting users in the country.
Make sure you update WhatsApp to stay safe against any attacks on your smartphone.
Vodacom released its financial results for the year ended 31 March 2019, which revealed that the company increased its capital expenditure in South Africa to R9.6 billion.
“Our capital expenditure of R9.6 billion was utilised to drive our strategy of being the leading digital telco,” Vodacom said.
Vodacom said it focused on improving overall mobile network performance and customer experience through network modernisation and capacity upgrade initiatives.
“We delivered substantial cost savings through the introduction of Digital Technologies for smart planning, smart deployment and smart operations,” Vodacom said.
Thanks to its continued infrastructure investment, Vodacom increased its 4G population coverage to 90% and its 3G coverage to 99.5%.
The company also spent R2 billion on IT over the past year, with a strong focus on becoming smarter and more agile in delivering products.
“We continued to deepen our Digital IT capabilities through our IT acceleration programme,” the company said.
“We continue to invest in Cloud infrastructure and migrating applications, IT services and network functions into Cloud platforms to enhance flexibility and improve scalability, availability and performance of services,” it said.
One of the best ways to understand the potential of the Google Assistant is to watch how fast the voice-activated helper can now bring up Beyoncé’s Instagram page.
“Hey, Google,” says Meggie Hollenger, a Google program manager, using the wake words that trigger the software on her smartphone. Then it’s off to the races as she shoots off 12 commands in rapid-fire succession.
“Open the New York Times … Open YouTube … Open Netflix … Open Calendar … Set a timer for 5 minutes … What’s the weather today? … How about tomorrow? … Show me John Legend on Twitter … Show me Beyoncé on Instagram … Turn on the flashlight … Turn it off … Get an Uber to my hotel.”
As she asks each question, the phone pops up the new information. The whole sequence takes 41 seconds. She doesn’t have to repeat the wake words between commands. When she makes the request to see what Beyoncé is up to, the Assistant not only launches the Instagram app, it automatically takes us directly to the pop star’s page so I can see the latest photos she’s shared with her 127 million followers. Likewise, when Hollenger asks for an Uber, the software already knows where she’s staying.
Three years after CEO Sundar Pichai introduced his AI-driven virtual assistant to the world, Google is previewing the “next-generation” of the Assistant at its annual I/O developer conference on Tuesday. The Google Assistant can now deliver answers up to 10 times faster than it did before. A big boost of speed could help turn around the perception that voice assistants are too laggy and inaccurate. That’s a big deal if companies like Google and Amazon want to take these digital helpers further into the mainstream.
Making Google Assistant a success is key for the world’s biggest search service, which delivers answers to over a trillion searches a year. Many of us are moving away from looking for information by typing on our computers and are instead talking to our smartphones and smart speakers. Google is now racing with Amazon, and its Alexa voice assistant, and Apple, with Siri, to give us the instant gratification we increasingly expect from our always-connected devices.
That’s why Google invited me to its global headquarters in Mountain View, California, a few days before I/O to see the biggest update yet of its make-or-break Assistant.
It’s fascinating — and a little bit scary.
The next-gen digital assistant is the headliner in a new slate of features that showcase Google’s world-class artificial intelligence and engineering chops. The Assistant isn’t only faster, but smarter, with Google counting on breakthroughs it’s made in neural network research and speech recognition over the past five years to set itself apart from rivals.
And it’s getting more personal. You’ll be able to add family members to a list of close contacts. When you ask the Assistant for directions to your mom’s house, for instance, it knows who your mom is and where she lives. Another feature, an update to last year’s eerily human-sounding Duplex voice concierge, lets the Assistant automatically fill out forms on the web after you make a verbal request for actions like booking a rental car or ordering movie tickets.
“We could potentially see a world where actually talking to the system is a lot faster than tapping on the phone,” says Manuel Bronstein, vice president of product for the Google Assistant. “And if that happens — when that happens — you could see more people engaging.”
But all that highlights the massive cache of data Google already holds on billions of people across the planet. It also underscores how much more personal information it’s going to need to collect from us to bring the true vision of Google Assistant to life.
The Assistant is now on 1 billion devices, mostly because it comes preinstalled on phones running Android, the world’s most popular mobile operating system. Many of Google’s other services — Gmail, YouTube, Maps, the Chrome browser — also serve more than 1 billion people a month. All these services are useful and innovative, but their lifeblood is the data you feed the company every day through your search history, email inbox, video viewing habits and driving directions.
Of course, this is all predicated on the Assistant actually working as billed. Google wouldn’t let me try it for myself, and my colleagues and I weren’t allowed to video record the demo. Instead, Google provided us with a preshot marketing video. Hollenger also read from a script, following a cheat sheet of written commands. So it’s unclear how deft the software would be in carrying out the sometimes meandering requests of regular people on their mobile phones and smart home devices.
The demo even had a few stumbles. While the jumps from app to app are snappy, Hollenger had to repeat queries once or twice because the software didn’t process her requests on the first try. In other demos, though, Hollenger used the Assistant to dictate texts and emails with hyper accuracy. The system can also tell the difference between what she wants written in the email and what’s a general command. For example, when she says, “Send it,” the software sends the email instead of typing “Send it” in the email body.
Still, the Assistant is sure to be the subject of discussion — and perhaps controversy.
“There are positives and negatives and tradeoffs,” says Betsy Cooper, director of the Aspen Tech Policy Hub. “With the Google Assistant, since it’s always listening [for a wake word], there’s always the possibility that they could abuse that privilege.”
‘Your own individual Google’
The new Assistant is the culmination of five years of work, says Francoise Beaufays, a principal scientist at Google. That’s longer than this software has been around. Over those five years, Google researchers have made key advances in AI audio, speech, language recognition and voice control.
“What we did was reinvent the whole stack, using one neural network that does the whole thing,” says Beaufays.
It’s a major technological breakthrough, bringing down the space needed from 100 gigabytes to less than half a gigabyte. Still, the souped-up digital helper requires hefty computing power for a phone, so it will only be available on high-end devices. Google will debut the product on the next premium version of its flagship Pixel phone, expected in the fall.
Days before he unveiled the Assistant in May 2016, I sat down with Pichai in his glass-walled office, secluded within the sprawling Googleplex, to hear his pitch. The search giant, already years late to the digital voice assistant game, was finally getting ready to jump into the ring with Siri and Alexa.
From the very beginning, Pichai was adamant it was much more than that. For Google, the Assistant is about breaking past the company’s iconic white homepage and spilling its engineering smarts into every piece of tech you own — your phone, your car, your washing machine.
“It’s Google asking users, ‘Hi. How can I help?'” he said at the time. “Think of it as building your own individual Google.”
Now as Pichai ushers in a new phase for the Assistant — including the feature that knows specific details about your family — it’s clearer than ever that when he said “your own individual Google,” he meant it.
Google wouldn’t make Pichai available for an interview.
Of course, the world is a much different place than it was three years ago.
Then there’s the public debate on privacy and security. Lawmakers and consumers are taking a harder look at the policies of big tech companies after Facebook’s Cambridge Analytica scandal, which brought data collection issues to the forefront throughout 2018. Google was criticized just last month for its Sensorvault database, which helps measure the effectiveness of lucrative targeted ads served to you based on the personal information Google knows about you. It turns out that police departments across the country have tapped Sensorvault for location data when trying to crack criminal investigations. In response, a US House of Representatives committee sent a letter to Pichai demanding answers about the database. Lawmakers have asked for an in-person briefing by May 10.
When I asked during a product briefing last week what Google would do if law enforcement asked for data on family relationships and other info collected by Assistant, a spokesman said that Google doesn’t have anything to share on that front.
Bronstein, the product head for the Assistant, says Google constantly has “very good debates” about storing data for advertising purposes. The philosophy, he says, is “Don’t store the information for the sake of storing it. Store it because you think it can deliver value.”
He adds, “We want to be very transparent with all those things, so that you know when this is going to be used for advertising or is … never going to be used for advertising.”
But privacy experts say Google should do a better job communicating its policies to consumers.
“I don’t know how well people actually understand,” said Jen King, director of consumer privacy at the Stanford Center for Internet and Society. She adds that the company should give people more options to opt out of data collection, instead of lumping things together.
Google has already been challenged on how it deals with transparency. Last year, the Associated Press reported that Google tracked people’s location even after they’d turned off location-sharing on their smartphones. The data was stored through a Google Maps feature called “Location History,” the same feature at issue in the Sensorvault database. Critics like the ACLU said Google was being disingenuous with its disclosures. The company later revised a help page on its website to clarify how the settings work. Last week, Google announced a feature that lets people auto-delete location, web and app history.
Bronstein also says a “small fraction” of voice queries from the Assistant are shared with a team at Google that works on improving the AI system, if users allow for that in the settings. He didn’t provide any details about how many “small” is. But he did say that in those cases, personal information is stripped from the voice audio.
The evolution of Duplex
In addition to giving the Assistant a jolt of speed, Google is also updating the project that stoked the most controversy at last year’s conference: Duplex.
The feature uses unnervingly human-sounding AI software to call businesses to book reservations and appointments on behalf of Google Assistant users. Its AI mimics human speech, using verbal tics like “uh” and “um.” It speaks with the cadence of a real person, pausing before responding and elongating certain words as though it’s buying time to think.
Last year’s demo immediately raised flags for AI ethicists, industry watchers and consumers, who worried about the robot’s ability to deceive people. Google later said it would build in disclosures so people would know they were talking to automated software.
This new iteration is a lot tamer.
Google on Tuesday is updating Duplex to streamline bookings for more types of things, such as car rentals and movie tickets. But this time there are no human-sounding robots. It basically automates the process of filling out forms you’d find on the mobile web — think of it like autofill on steroids.
Here’s how it works: You say something like “Hey Google, get me a rental car from National for my next trip.” The Assistant then pulls up National’s website on your phone and starts filling out the fields in real time.
Throughout the process, you’ll see a progress bar, just like one you’d see if you were downloading a file. Whenever Duplex needs more information, like a price or seat selection, the process pauses and prompts you to make a selection. When the form is filled, you tap to confirm the booking or payment. Like other Assistant features, the system fills out the form by using data culled from your calendar, Gmail inbox and Chrome autofill (that includes your credit card information). The update launches later this year on Android phones.
While this version will probably cause less blowback, last year’s widespread recoil was a key moment for Google, Scott Huffman, head of engineering for the Google Assistant, told me earlier this year. “The strength of the reaction surprised me,” he said. “It made it clear to us how important those societal questions are going forward.”
There’s other stuff coming for the Assistant, too. Google on Tuesday also unveiled a new “driving mode” for Android phones. When you activate it, the user interface puts a few items front and center that you’re likely to use while driving. Those include navigation directions for Google Maps and Waze, music controls and reminders of missed calls. When you’ve got navigation directions up, your music or phone call controls sit at the bottom of the screen, so you don’t have to fiddle with your phone to find them.
‘Rules of the road’
Taken as a whole, Google’s new Assistant announcements could have a hefty impact on how we use tech.
Making voice commands easier and faster could change the way we interact with devices, just as when smartphones, led by Apple’s iPhone, became mainstream over a decade ago and sparked the age of touchscreen everything.
We may look back at this as the first step toward a world in which people are constantly talking to inanimate objects. (It reminds me of those videos of toddlers holding magazines, trying to swipe at them like they’re iPads. In the future, kids could talk to a candle or chair and be surprised when it doesn’t talk back.)
The next-gen Assistant could also set a foundation for new habits around voice queries. Last year, Google announced “continued conversation” for voice commands, which keeps the mic open for eight seconds after a query so you can ask a follow-up question. The next-gen Assistant builds on that concept and could eventually forge a path for getting rid of wake words. (Huffman told me earlier this year that he thinks wake phrases like “Hey Google” are “really weird” and unnatural.)
That open mic would likely spark privacy concerns. Bronstein says it’s helpful to keep the microphone open for a little while — the company is still tuning how long that duration will be — but he wants people to be “intentional” when they’re talking to it. “You don’t necessarily want this thing to be transcribing everything you’re saying,” he says. “Because you wouldn’t feel comfortable.”
There are many other ways Google could advance the Assistant. Huffman told me earlier this year he’s interested in having the software remember an exact discussion you had with it yesterday, so that today you can pick up where you left off. He even wants the Assistant to be able to detect your mood and tone.
Whether that’s frightening or not, it’s how Google is thinking about evolving the Assistant. For now, though, Bronstein says he’s focused on making the experience more seamless, and figuring out what features will be valuable to users before adding that future-looking stuff.
In the meantime, people will have to work through all the issues that come with large-scale data collection and smarter-than-ever tech, and Google knows that. As Huffman told me earlier: “With AI, we’re going to end up with society thinking through some of the rules of the road.” ●
Unity is one of the most popular IDEs and game engines used by game developers to create games for Android and other platforms. While game development is practically an art in itself, Unity makes the process simpler thanks to the tools and features it provides to build 2D and 3D environments and complex mechanics across multiple platforms. Unity 2019.1 (19.1 for short) is now available for game developers, bringing several “preview” features over in stable form for developers to implement in their games, as well as introducing new preview features of its own.
One of the highlight features of this release for Android is the availability of a preview version of Adaptive Performance for Samsung Galaxy flagships. Unlike PC and consoles, gaming on mobile devices has an inherent limitation of heat management and power consumption. Beautiful-looking and smooth-playing games have intensive processing needs, which can quickly heat up your device. PC and consoles tackle this issue through their active cooling systems, but since phones do not feature active cooling hardware (yet), the phone ends up throttling performance to keep the temperature in check. The issue becomes even more problematic considering the wide range of hardware available, and the varying performance and throttling scenarios.
Game developers tackle this issue through two main approaches: ensuring maximum compatibility by sacrificing graphic fidelity and frame rate, or by anticipating hardware behavior, which is difficult to execute.
Unity and Samsung have collaborated on a feature called “Adaptive Performance”, which provides a better way to manage the thermals and performance of games in real time. After you install Adaptive Performance through the Unity Package Manager, Unity automatically adds the Samsung GameSDK subsystem to your project. At runtime on supported devices, Unity creates and starts an Adaptive Performance Manager, which provides feedback about the thermal state of the device. Developers can then subscribe to events or query information from the Adaptive Performance Manager at runtime and react to thermal trends in real time. For instance, when the device begins throttling in the early stages, the game can tune quality settings, target frame rate and other parameters to eke out more sustained performance. Once the temperature starts declining again, parameters can be tweaked to deliver better gameplay performance. By keeping a closer eye on thermal behaviour, a game can avoid throttling altogether by adjusting performance based on real-time feedback, leading to a more predictable frame rate and gameplay experience and lower thermal buildup.
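The feedback loop described above can be sketched in a few lines. This is a minimal illustration in Python, not Unity’s actual C# API: the state names, the `read_thermal_state` stand-in, and the specific quality parameters are all hypothetical, chosen only to show the subscribe-and-react pattern.

```python
import random

# Illustrative thermal warning levels, loosely mirroring what a framework
# like Adaptive Performance reports (names are hypothetical).
NOMINAL, WARNING, THROTTLING = "nominal", "warning", "throttling"

def read_thermal_state():
    """Stand-in for a platform query; a real game would instead subscribe
    to thermal events delivered by the device's game SDK."""
    return random.choice([NOMINAL, WARNING, THROTTLING])

def adjust_quality(state, settings):
    """Scale quality down before the OS throttles, back up as the device cools."""
    if state == THROTTLING:
        settings["target_fps"] = 30
        settings["resolution_scale"] = 0.7
    elif state == WARNING:
        # Pre-emptively reduce load while headroom remains.
        settings["target_fps"] = 45
        settings["resolution_scale"] = 0.85
    else:
        # Nominal temperature: restore full quality.
        settings["target_fps"] = 60
        settings["resolution_scale"] = 1.0
    return settings

settings = {"target_fps": 60, "resolution_scale": 1.0}
settings = adjust_quality(WARNING, settings)
print(settings)  # {'target_fps': 45, 'resolution_scale': 0.85}
```

The key design point is that the game reacts to the thermal *trend* rather than waiting for the OS to throttle the CPU and GPU underneath it, which is what makes the frame rate predictable.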
A preview version of Adaptive Performance is available for Unity 2019.1, with support for the Galaxy S10 and Galaxy Fold. Support for more Galaxy devices will follow later in the year, and a representative mentioned to Android Authority that Unity is also speaking with other manufacturers.
The Mobile Notifications Preview package will help developers implement retention mechanics and timer-based gameplay by adding support for scheduling repeatable or one-time local notifications on Android 4.1 and above.
Android SDK and NDK installation through Unity Hub
The Unity Hub now lets developers install all the required components for Android as part of the Android Build Support option, ensuring that they get the correct dependencies. You also have the option of installing and configuring the components manually and using Android Studio.
Android Logcat integration
Unity 2019.1 now integrates logcat functionality, making it easier to debug by controlling and filtering messages from within Unity.
Faster iteration with Scripts Only Build patching on Android
You can now use the Scripts Only Build option to skip through several steps in the build process as it recompiles only scripts and patches an already-existing app package on the device. The final package is built and deployed when you select Build and Run.
Many more platform-independent features
The features listed above are for game development on Android. Unity 2019.1 also packs in several more changes that apply to the whole game engine, extending the benefits to Android as well as other platforms. Unity has posted an extensive change list, with emphasis on features like Burst Compiler, Lightweight Render Pipeline, Shader Graph and so much more.
Engen’s new “Big Bird” 1-Stop has also been launched in the New Road complex, and includes an Engen Quickshop, Wimpy, Corner Bakery, Andiccio 24, Kauai, KFC, Woolworths Foodstop, and Schoon Bakery – along with ATMs.
Collection points and pricing
At the time of writing, the following pickup points were listed by Takealot:
Takealot said that the collection point option will be presented to customers whose order items are all eligible for collection, and that orders of R450 or more will qualify for a “free collection”.
Collection of orders totalling under R450 will cost customers R25. Collection from the Takealot Cape Town warehouse remains free.
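The pricing rules above reduce to a simple calculation. This is an illustrative sketch of the published rules, not code from Takealot; the function name and parameters are our own:

```python
FREE_COLLECTION_THRESHOLD = 450  # rand; orders at or above this collect free
COLLECTION_FEE = 25              # rand; fee for smaller orders

def collection_fee(order_total, warehouse_pickup=False):
    """Fee for collecting an order, per Takealot's published rules:
    warehouse collection is free, orders of R450+ collect free,
    and smaller orders pay a flat R25."""
    if warehouse_pickup:
        return 0
    return 0 if order_total >= FREE_COLLECTION_THRESHOLD else COLLECTION_FEE

print(collection_fee(500))                         # 0
print(collection_fee(300))                         # 25
print(collection_fee(300, warehouse_pickup=True))  # 0
```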
Customers will have seven days to collect their order once it has been delivered to a pickup point. If the seven days elapse and the order has not been collected, it will be returned to the Takealot warehouse and the customer’s account will be credited.
Items which are not available for collection include liquor and large appliances, stated Takealot.
The company’s FAQ page states that customers may also nominate a person to collect their order on their behalf, provided they have the QR code or unique PIN linked to their order.
Once an order has been placed and the collection point selected, it is not possible to change collection points or change the order to a home or office delivery, states the FAQ page.
Intel promises to continue to meet current customer commitments for its existing 4G smartphone modem product lines, but it no longer expects to launch 5G modem products in the smartphone space, including those originally planned for launch in 2020.
“We are very excited about the opportunity in 5G and the ‘cloudification’ of the network, but in the smartphone modem business, it has become apparent that there is no clear path to profitability and positive returns.”
Intel’s announcement reduces the surprise factor of Apple and Qualcomm’s settlement. Intel’s exit from the 5G space means that Apple will have to look elsewhere for the technology, which happens to be Qualcomm’s current forte. It now seems certain that Apple will have no choice but to adopt Qualcomm modems in the iPhone, including the Qualcomm 5G components that will likely make their way into Apple’s smartphones next year.
As part of the settlement, Apple has agreed to pay an unspecified amount to Qualcomm, and the two parties have struck a six-year patent licence deal and a “multiyear” wireless chipset deal.