
Snapchat+ Surges Ahead: Premium Service Surpasses X Premium in Revenue

Snapchat’s premium subscription, Snapchat+, is experiencing remarkable growth, outpacing even X Premium (formerly Twitter Blue) in revenue. Data from Apptopia reveals that, in November, Snapchat+ reached a record high of over $20 million in net revenue (after app store fees), surpassing X Premium’s $6.2 million. This sustained growth indicates that younger users are increasingly embracing the subscription’s perks, including enhanced story tools, early access to AI features like My AI, and the ability to pin Best Friends and change app icons.

Key takeaways:

  • Snapchat+ revenue surpasses X Premium: In November, Snapchat+ generated over three times the in-app revenue of X Premium, demonstrating its stronger user base and monetization strategy.
  • Over 5 million subscribers: Since its launch, Snapchat+ has attracted over 5 million subscribers, showcasing its appeal among younger users.
  • Early access to AI features: Exclusive access to AI features like My AI and Dreams is driving further subscriber growth.
  • International popularity: The top markets for Snapchat+ revenue include the US, UK, France, Australia, and Canada, with the US leading at $1.8 million in net revenue.
  • Unexpected traction: Saudi Arabia’s ranking as the No. 7 market by consumer spending highlights the subscription’s reach beyond established markets.
  • Underestimated revenue: Gift card purchases via Amazon are not tracked by Apptopia, suggesting the actual revenue could be even higher.

The future of Snapchat+:

  • Continuous subscriber growth and revenue increases suggest a bright future for Snapchat’s premium offering.
  • Introduction of new features and exclusive content will likely attract even more users.
  • Snapchat+’s success provides valuable insights for other social platforms seeking to explore subscription models.


Image Credits: Apptopia

Overall, Snapchat+’s impressive growth demonstrates the potential of premium subscriptions in the social media landscape. By offering valuable features and catering to its targeted audience, Snapchat has created a successful monetization strategy that sets it apart from its competitors.

Gag City is a viral win for Nicki Minaj

Welcome to Gag City, the pink metropolis inhabited by stans and brands alike.

In the days leading up to Friday’s release of “Pink Friday 2,” Nicki Minaj’s fifth studio album and the sequel to her debut record “Pink Friday,” Twitter was flooded with AI-generated images of pink-toned cityscapes. Gag City, the dreamy false utopia ruled by Minaj and her Barbz, broke through stan Twitter and became a viral meme that brand accounts immediately used for their own marketing — promoting Minaj’s album for free.

Is it an authentic stan-led campaign to build hype for Minaj? Is it a plant to game engagement for both the album and brands? What’s clear is that the viral moment is a win for Minaj, manufactured or not.

It started in September, when Minaj teased the album’s cover art online. The image features Minaj on a pink subway car, drifting through pink clouds with a futuristic (and obviously, pink) city skyline in the background.

She and her Barbz started referring to the album’s release as “Gag City,” NBC News reports, referencing gay slang for being so amazed that you’re at a loss for words. One might be gagged by witnessing a stunning outfit change, or by listening to a perfect record, like “Pink Friday 2.” Leading up to the release, stans started posting AI-generated images of a pink concrete jungle, joking that fictional characters and celebrities were arriving in Gag City in anticipation of Minaj’s album. In one of the first, posted on Dec. 1 according to Know Your Meme, a fan account shared an image of a pink plane labeled “Gagg City” flying over a similarly pink skyline.

In the days before the release, Minaj told fans to “prepare for landing” and teased a description of her pink utopia. Barbz replied with AI-generated renditions of the descent into Gag City.

X (formerly Twitter) users began crafting elaborate narratives about Gag City’s inhabitants and government. One posted an image of Barbz storming the Pink House, which another user described as the fandom’s own January 6th. Another posted an image of pink-clad citizens protesting in the streets of Gag City, calling for Minaj to release the album’s track list. Though some may believe that Gag City is a utopia, one account posted an image of a matronly Minaj handing out CDs of her album to impoverished children “on the outskirts of Gag City,” implying that the pink society also has a class divide problem.

Gag City is also riddled with stan wars, as fans of rival pop stars posted images of their faves vying for Minaj’s seat at the head of her city’s government. In a nod to Greek mythology, one account posted an image of a Trojan horse decorated in Beyoncé’s “Renaissance” disco motif.

Never one to miss out on an easy trend, brand accounts started joining in on the Gag City hype. Chili’s posted an image of rosy smoke billowing from its restaurant, which makes me wonder if air pollution exists in Gag City. Wheat Thins, Baskin-Robbins, Dunkin’ Donuts, Pizza Hut, Red Lobster, Oreo, Bing (??), the Empire State Building and countless others posted their versions of Gag City.

On one hand, memes tend to die the minute brand accounts start co-opting them — nothing is more tiresome than seeing a fun joke turn into a corporate-friendly marketing ploy. AI-generated images are already ethically fraught: critics have raised concerns over AI generators trained on artwork without the artists’ consent, and artists have criticized brands for using AI-generated art instead of commissioning work from a real, human artist. Though commercial use of AI-generated art is legal, as copyright law pertaining to AI is virtually non-existent, it’s generally seen as a shitty move by many in the art world.

On the other, it’s free promotion for Minaj, and as a lifelong Barb who spent her adolescence running a stan account for “Pink Friday,” I consider it a win.

Nicki Minaj is an artist who’s been embroiled in controversy throughout her career, from posting bad takes about Covid vaccines to defending her husband Kenneth Petty, a convicted sex offender. She may be a brilliant artist, but her problematic history makes her far from the family-friendly public figure that brands are more likely to endorse.

But with Gag City, Minaj has brands doing all of her marketing for her. “Pink Friday 2” is an artistic marvel in itself (though I am probably biased), but the free promotion that it’s been getting as a viral meme is particularly astounding. Artists have spent the last few years trying to drum up engagement for their work by making their songs trend on TikTok, which audiences have started to resist. Gag City doesn’t bank on being the viral song of the summer to drive streaming numbers — the bit is removed enough for non-stans to enjoy it, while still revolving around the album it’s promoting.

Brand Twitter tends to turn fun trends into advertising opportunities, taking organic community interactions and spitting out contrived versions clearly made to go viral. It may be grating, but in this case, it’s working in Minaj’s favor. This week, everyone wants to go to Gag City.

EU lawmakers bag late night deal on ‘global first’ AI rules

After marathon ‘final’ talks that stretched to almost three days, European Union lawmakers have tonight clinched a political deal on a risk-based framework for regulating artificial intelligence. The file was originally proposed back in April 2021, but it has taken months of tricky three-way negotiations to get a deal over the line. The development means a pan-EU AI law is definitively on the way.

Giving a triumphant but exhausted press conference in the small hours of Friday night/Saturday morning local time, key representatives for the European Parliament, Council and the Commission — the bloc’s co-legislators — hailed the agreement as hard fought, a milestone achievement and historic, respectively.

Taking to X to tweet the news, European Commission president Ursula von der Leyen — who made delivering an AI law a key priority of her term when she took up the post in late 2019 — also lauded the political agreement as a “global first”.

Full details of what’s been agreed won’t be entirely confirmed until a final text is compiled and made public, which may take some weeks. But a press release put out by the European Parliament confirms the deal reached with the Council includes a total prohibition on the use of AI for:

  • biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behaviour or personal characteristics;
  • AI systems that manipulate human behaviour to circumvent their free will;
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

The use of remote biometric identification technology in public places by law enforcement has not been completely banned — but the parliament said negotiators had agreed on a series of safeguards and narrow exceptions to limit use of technologies such as facial recognition. These include a requirement for prior judicial authorisation, with uses limited to a “strictly defined” list of crimes.

Retrospective (non-real-time) use of remote biometric ID AIs will be limited to “the targeted search of a person convicted or suspected of having committed a serious crime”. Real-time use of this intrusive AI tech will be limited in time and location, and permitted only for the following purposes:

  • targeted searches of victims (abduction, trafficking, sexual exploitation),
  • prevention of a specific and present terrorist threat, or
  • the localisation or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime).

The package agreed also includes obligations for AI systems that are classified as “high risk” owing to their “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law”.

“MEPs successfully managed to include a mandatory fundamental rights impact assessment, among other requirements, applicable also to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behaviour, are also classified as high-risk,” the parliament wrote. “Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.”

There was also agreement on a “two-tier” system of guardrails to be applied to “general” AI systems, such as the so-called foundational models underpinning the viral boom in generative AI applications like ChatGPT.

As we reported earlier, the deal reached on foundational models/general purpose AIs (GPAIs) includes some transparency requirements for what co-legislators referred to as “low tier” AIs — meaning model makers must draw up technical documentation and produce (and publish) detailed summaries about the content used for training, in order to support compliance with EU copyright law.

For “high-impact” GPAIs (defined as models whose cumulative training compute, measured in floating point operations, exceeds 10^25) that carry so-called “systemic risk”, there are more stringent obligations.
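
For a sense of scale, here is a rough sketch of what crossing that 10^25 threshold takes, using the common 6·N·D rule of thumb for dense transformer training compute. Both the rule of thumb and the example model sizes are illustrative assumptions, not figures from the Act:

```python
# Back-of-the-envelope check against the Act's 10^25 FLOP threshold,
# using the common approximation C ~ 6 * N * D for dense transformer
# training (N = parameters, D = training tokens). The model sizes below
# are illustrative assumptions, not figures from the Act.

THRESHOLD_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in floating point operations."""
    return 6 * params * tokens

examples = [
    ("7B params, 2T tokens", 7e9, 2e12),
    ("70B params, 2T tokens", 70e9, 2e12),
    ("1.8T params, 13T tokens", 1.8e12, 13e12),
]
for name, n, d in examples:
    c = training_flops(n, d)
    tier = "above threshold (systemic risk)" if c > THRESHOLD_FLOPS else "below threshold"
    print(f"{name}: ~{c:.1e} FLOPs -> {tier}")
```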

“If these models meet certain criteria they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency,” the parliament wrote. “MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.”

The Commission has been working with industry on a stop-gap AI Pact for some months — and it confirmed today that the Pact is intended to plug the gap in practice until the AI Act comes into force.

While foundational models/GPAIs that have been commercialized face regulation under the Act, R&D is not intended to be in scope of the law — and fully open sourced models will have lighter regulatory requirements than closed source, per today’s pronouncements.

The package agreed also promotes regulatory sandboxes and real-world testing, to be established by national authorities, to help startups and SMEs develop and train AI models before placing them on the market.

Penalties for non-compliance can lead to fines ranging from €35 million or 7% of global turnover to €7.5 million or 1.5% of turnover, depending on the infringement and size of the company.
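
Here is a minimal sketch of how such tiered penalties are typically computed. It assumes that the higher of the fixed amount and the turnover percentage applies, mirroring other EU digital rules such as the GDPR; the final text will confirm the exact mechanics:

```python
# Illustrative fine calculator for the AI Act's penalty tiers. Assumes
# "the higher of the two" applies, mirroring other EU digital rules
# (e.g. GDPR); the final text will confirm the exact mechanics,
# including any adjustments for SMEs.

TIERS = {
    "prohibited_use": (35_000_000, 0.07),      # €35M or 7% of global turnover
    "other_infringement": (7_500_000, 0.015),  # €7.5M or 1.5% of turnover
}

def max_fine_eur(tier: str, global_turnover_eur: float) -> float:
    fixed, pct = TIERS[tier]
    return max(fixed, pct * global_turnover_eur)

# A company with €2B in global turnover breaching a prohibited-use rule:
print(f"€{max_fine_eur('prohibited_use', 2_000_000_000):,.0f}")  # €140,000,000
```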

The deal agreed today also allows for a phased entry into force after the law is adopted — with six months allowed until rules on prohibited use cases kick in; 12 months for transparency and governance requirements; and 24 months for all other requirements. So the full force of the EU’s AI Act may not be felt until 2026.
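
The phase-in arithmetic itself is straightforward. Below is a toy sketch using a purely hypothetical entry-into-force date, since the actual date depends on final adoption and publication:

```python
# Toy phase-in calculator for the AI Act's staggered deadlines. The
# entry-into-force date is a placeholder assumption; the real one
# depends on final adoption and publication in the Official Journal.
from datetime import date

from dateutil.relativedelta import relativedelta  # pip install python-dateutil

entry_into_force = date(2024, 6, 1)  # hypothetical

for label, months in [
    ("rules on prohibited use cases apply", 6),
    ("transparency and governance requirements apply", 12),
    ("all other requirements apply", 24),
]:
    print(f"{label}: {entry_into_force + relativedelta(months=months)}")
# With this placeholder date, full application would land in mid-2026.
```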

Carme Artigas, Spain’s secretary of state for digital and AI issues, who led the Council’s negotiations on the file as the country has held the rotating Council presidency since the summer, hailed the agreement on the heavily contested file as “the biggest milestone in the history of digital information in Europe”, both for the bloc’s digital single market and, she suggested, “for the world”.

“We have achieved the first international regulation on artificial intelligence in the world,” she announced during a post-midnight press conference to confirm the political agreement, adding: “We feel very proud.”

The law will support European developers, startups and future scale-ups by giving them “legal certainty with technical certainty”, she predicted.

Speaking on behalf of the European Parliament, co-rapporteurs Dragoș Tudorache and Brando Benifei said their objective had been to deliver AI legislation that would ensure the ecosystem developed with a “human centric approach” which respects fundamental rights and European values. Their assessment of the outcome was equally upbeat — citing the inclusion in the agreed text of a total ban on the use of AI for predictive policing and for biometric categorization as major wins.

“Finally we got in the right track, defending fundamental rights to the necessity that is there for our democracies to endure such incredible changes,” said Benifei. “We are the first ones in the world to have a horizontal legislation that has this direction on fundamental rights, that supports the development of AI in our continent, and that is up to date to the frontier of the artificial intelligence with the most powerful models under clear obligation. So I think we delivered.”

“We have always been questioned whether there is enough protection, whether there is enough stimulant for innovation in this text, and I can say, this balance is there,” added Tudorache. “We have safeguards, we have all the provisions that we need, the redress that we need in giving trust to our citizens in the interaction with AI, in the products in the services that they will interact with from now on.

“We now have to use this blueprint to seek now global convergence because this is a global challenge for everyone. And I think that with the work that we’ve done, as difficult as it was — and it was difficult, this was a marathon negotiation by all standards, looking at all precedents so far — but I think we delivered.”

The EU’s internal market commissioner, Thierry Breton, also chipped in with his two euro-cents — describing the agreement clinched a little before midnight Brussels’ time as “historic”. “It is a full package. It is a complete deal. And this is why we spent so much time,” he intoned. “This is balancing user safety, innovation for startups, while also respecting… our fundamental rights and our European values.”

Despite the EU very visibly patting itself on the back tonight on securing a deal on ‘world-first’ AI rules, it’s not quite the end of the road for the bloc’s lawmaking process, as there are still some formal steps to go — not least votes in the parliament and the Council to adopt the final text. But given how much division and disagreement there has been over how (or even whether) to regulate AI, it’s clear the biggest obstacles have been dismantled and the path to passing the EU AI Act in the coming months looks clear.

The Commission is certainly projecting confidence. Per Breton, work to implement the agreement starts immediately with the setup of an AI Office within the EU’s executive — which will have the job of coordinating with the Member State oversight bodies that will need to enforce the rules on AI firms. “We will welcome new colleagues… a lot of them,” he said. “We will work — starting tomorrow — to get ready.”

Unveiling the Success Story: Four Techniques that Secured $40 Million for My Deep Tech Venture

In the fast-paced world of deep tech startups, securing funding in the face of a global downturn demands innovative approaches. As of 2023, early-stage startups grapple with a 15% decline in global investments, creating an uphill battle for those pioneering novel solutions. However, amid this challenging landscape, my international deep tech venture successfully secured a $40 million seed round, standing as a testament to the effectiveness of strategic techniques in attracting diverse investments.

The Current Funding Challenge

The Global Scenario

In 2023, Crunchbase reports a substantial 15% downturn in global investments, painting a challenging backdrop for early-stage startups worldwide. The repercussions of this economic climate are especially pronounced for deep tech startups striving to introduce innovative solutions in emerging markets.

European Deep Tech Funding Trends

Dealroom’s insights shed light on the funding journey of deep tech companies in Europe. While 2021 witnessed a surge in interest in AI, leading to increased funding, the trend reversed in 2022, and this downward trajectory continues into 2023.

Four Techniques for Securing $40 Million

1. Crafting a Compelling Narrative

The Power of Storytelling

A pivotal aspect of attracting potential investors is the preparation of a comprehensive set of documents, with the pitch deck taking center stage. While pitch decks are essential for presenting the core idea and market dynamics, our journey revealed the need for an additional narrative format — the “product book.”

The Role of the Product Book

Unlike a pitch deck, a product book delves deeper into the intricacies of the product, providing a more elaborate explanation. In crafting this immersive guide, key elements include:

  • Problem Statement: Clearly articulate the problem your revolutionary product addresses.
  • Possible Solution: Paint a picture of the solution without delving into technical details.
  • Why Now: Explain the timing, detailing why achieving the same goal was impossible before and what has changed.
  • Competitors: Highlight shortcomings in existing solutions, emphasizing your product’s unique value proposition.
  • Ecosystem: Outline the necessary ecosystem for your product’s functionality, showcasing foresight.
  • Use Cases: Explore diverse applications of your product across various industries.
  • Visuals: Present each application vividly, utilizing impactful visuals.
  • Components: Explain the feasibility of the solution by addressing key components.

2. Negotiating with Diverse Entities

Venture Funds and Business Angels

Our funding success involved strategic negotiations with a diverse range of entities, including venture funds and business angels. This approach ensures a well-rounded financial backing, leveraging the expertise and networks of these entities.

3. Market Dynamics Insight

Adapting to Shifting Markets

Navigating the dynamic landscape of funding requires a nuanced understanding of market dynamics. While the interest in AI spurred funding in 2021, shifts in market sentiments in subsequent years demand adaptability and continuous assessment of emerging trends.

4. Proving Feasibility with a Vision

Addressing Technical Complexity

In the realm of deep tech, proving the feasibility of a technically sophisticated product is a challenge. Introducing a clear vision of the product’s components and its impact on diverse use cases helps bridge the gap between complexity and investor understanding.


Conclusion: Navigating Challenges with Innovation

As deep tech startups face the headwinds of a global investment downturn, innovative approaches become the lifeline for securing crucial funding. The success of my international deep tech venture, securing a $40 million seed round, underscores the significance of crafting compelling narratives, negotiating strategically, staying attuned to market dynamics, and proving feasibility through a visionary approach.

In the ever-evolving landscape of deep tech, the ability to tell a compelling story, navigate diverse negotiations, understand market nuances, and articulate a clear vision are the pillars supporting successful fundraising endeavors.

Unveiling Grok: Elon Musk’s AI Chatbot Revolutionizes X’s Premium+ Experience

In a groundbreaking move, X, the social media platform formerly known as Twitter, has rolled out Grok, the rebellious AI chatbot developed by Elon Musk’s xAI startup. This innovative feature is exclusively available to Premium+ subscribers in the U.S., marking a significant leap in the platform’s evolution. As Musk promises rapid improvements and a global expansion plan, let’s delve into the details of Grok’s launch and its potential impact on X’s future.

The Grok Rollout Journey

1. Beta Launch and Musk’s Caution

On December 7, 2023, Grok’s beta version was introduced to all U.S. Premium+ subscribers. Elon Musk, the visionary behind this development, acknowledged that there would be initial issues but expressed confidence in steady improvements. Musk, known for ambitious timelines, cautioned users about the beta’s challenges while promising a refined experience over time.

2. Global Expansion Plans

Musk envisions a swift rollout beyond the U.S., with all English language Premium+ subscribers gaining access to Grok in approximately a week. Japanese users, representing X’s second-largest user base, are slated to follow suit. The ambitious goal is to make Grok available in “hopefully” all languages by early 2024, signaling a commitment to global inclusivity.

Assessing Musk’s Track Record

1. Timeliness and Past Launch Estimates

Elon Musk’s reputation for setting ambitious timelines, as witnessed in Tesla’s Full Self-Driving (FSD) promises, often leads to skepticism. With Grok, however, Musk has remained relatively on schedule: on November 22 he projected a launch “next week,” and the rollout arrived on December 7, a slight delay but generally timely execution.

Grok’s Position in X’s Subscription Tiers

1. Premium+ Subscription and Features

Grok is an exclusive feature of X’s Premium+ subscription tier, priced at $16 per month. This top-tier offering provides users with an ad-free experience in the For You and Following timelines. Additionally, Premium+ subscribers enjoy boosted replies, ads revenue sharing for creators, ID verification, a verified checkmark, access to Media Studio, and more.

2. Appeal and Potential Challenges

While Grok introduces a new level of interactivity and engagement, its exclusivity to the Premium+ tier raises questions about its appeal to a broader audience. With free alternatives like ChatGPT and Google’s Bard available, the $16 monthly price tag might deter casual AI enthusiasts.

X’s Revenue Landscape

1. Historical Revenue Sources

Historically, X’s revenue has been driven primarily by advertising rather than subscriptions. However, recent clashes between Elon Musk and X’s advertisers have sparked uncertainties about the platform’s ad-supported future. Musk’s confrontational stance, including telling advertisers to “fuck yourself,” has implications for the platform’s stability.

2. Subscription Revenue Growth

Despite advertising being the main revenue driver, X experienced a surge in subscription revenue in November 2023. Estimates suggest a record-breaking $6.2 million in net revenue after app store fees. However, this figure is still significantly lower than the in-app subscription revenue generated by platforms like Snapchat, indicating potential growth opportunities for X.

Future Challenges and Opportunities

1. Subscriber Growth Potential

X, with over 500 million monthly active users, has room for substantial subscriber growth. The success of Grok and the appeal of Premium+ features will play a crucial role in attracting more users to the subscription model.

2. Navigating Challenges with Innovation

As X faces challenges in retaining advertisers, the platform’s sustainability may depend on innovating and diversifying revenue streams. Encouraging a larger user base to subscribe to Premium+ is essential for offsetting potential losses in ad revenue.

Conclusion: Grok’s Impact and X’s Evolution

Grok’s introduction marks a significant milestone in X’s journey towards offering enhanced user experiences and redefining its revenue landscape. As Elon Musk navigates the delicate balance between subscriber growth and advertiser relations, the future of X hinges on the success of Grok and the broader appeal of Premium+ features.

The coming months will unveil whether Grok becomes a driving force in X’s subscription revenue or if X needs to explore additional strategies to secure its financial sustainability. One thing is certain — the digital landscape is evolving, and X’s approach to innovation will shape its trajectory in the competitive social media realm.

OpenAI taps former Twitter India head to kickstart operations in the country

OpenAI is working with former Twitter India head Rishi Jaitly as a senior advisor to facilitate talks with the government about AI policy, TechCrunch has exclusively learned. OpenAI is also looking to set up a local team in India.

People familiar with the matter told TechCrunch that Jaitly has been helping OpenAI navigate the Indian policy and regulatory landscape.

OpenAI currently does not have an official presence in India (apart from a trademark, approved earlier this month). However, OpenAI co-founder and CEO Sam Altman visited New Delhi during his world tour in June and met with Prime Minister Narendra Modi. After his meeting, Altman said he had a great conversation with Modi. Nevertheless, neither Altman nor the company made any announcements during his two-day visit.

It’s not clear if Jaitly is formally employed at OpenAI, but he’s been taking on a role advising the company on how to establish connections in India. He started in the role sometime after Altman’s New Delhi visit, two sources told TechCrunch.

Between 2007 and 2009, Jaitly served as head of the public-private partnership for Google in India before moving to Twitter (now called X) in 2012. He was the company’s first employee in the country, according to his LinkedIn profile.

He was later elevated to VP for the APAC and MENA region. In late 2016, Jaitly left Twitter and became the co-founder and CEO of Times Bridge, the global investment arm of the Indian media giant The Times Group. Times Bridge’s portfolio includes Uber, Airbnb, Coursera, Mubi, Smule and Wattpad. Jaitly left the firm in 2022.

OpenAI and Jaitly did not respond to requests for comment.

OpenAI’s vice president of global affairs, Anna Makanju, is scheduled to speak at the Global Partnership on Artificial Intelligence summit in Delhi next week, alongside other industry experts and international politicians. She will be a part of the session titled “Collaborative AI for Global Partnership (CAIGP) – Global Cooperation for Equitable AI.” Sources told TechCrunch that Jaitly assisted in setting up Makanju’s participation at the event.

Rishi Jaitly. Image Credits: Mobile Global Esports

In recent weeks, OpenAI’s leadership has been on a roller coaster. First, Altman was abruptly ousted from the company and president Greg Brockman quit in protest. The duo joined Microsoft for a hot minute before returning to OpenAI with a revamped board.

At an event hosted by Times Group in New Delhi during his June visit, Altman responded to a question about building foundational models with a $10 million budget. It’s “hopeless,” he said. (OpenAI has raised a bit more itself — over $11 billion to date — to build its foundational models.)

His comments met some backlash from Indian entrepreneurs, but Altman clarified later that his words were taken out of context and that he meant it’s hard to compete with the likes of OpenAI with such a budget.

“The right question is what a startup can do that’s never been done before, that will contribute a new thing to the world. I have no doubt Indian startups can and will do that,” he said in a post on X.

Critics have described India as severely lagging behind in the world of AI development, not least because of the lack of funding. This piece in September noted that India’s AI startups have raised around $4 billion, which sounds like a big number until you consider the $50 billion that has been poured into the ecosystem in India’s great rival, China; or the $11 billion+ that OpenAI alone has raised (along with the billions more picked up by other large players, and of course the money Big Tech is putting into this).

A more sympathetic viewpoint might be that India’s AI development is still just nascent, with a few startups such as Sarvam — which recently raised $41 million from investors including Lightspeed, Peak XV, and Khosla Ventures — just getting started on building foundational models.

“While there are over 1,500 AI-based startups in India with over $4 billion of funding, India is still losing the AI innovation battle,” analysts at Sanford C. Bernstein said in a note.

That leaves a big gap for companies like OpenAI. India, the world’s most populous country and the second-biggest internet market after China, with over 880 million users, presents an opportunity for growth. Altman hinted at the company’s interest in the country during his June visit to the engineering college IIIT Delhi.

“It really is amazing to watch what’s happening in India with the embrace of AI — not just OpenAI but other technologies, too,” he said at the time.

That said, the company has yet to disclose any investment in the country (save for the trademark).

And it might not be a fast move. An OpenAI investor told TechCrunch that the company does consider India its key market and is looking to explore opportunities to grow its presence.

But with OpenAI’s leadership locked in, now with a more aligned board behind its bolder commercial push, regulation is really one of the last things in the company’s way. And so working on the regulatory front may be the first and most important effort it can make right now.

For now, the task may be more about understanding what direction things will move in over the coming years.

Indian government officials have indicated multiple times this year that they are not looking to put strict regulations around AI development. India’s IT Minister of State Rajeev Chandrasekhar has repeatedly pushed for international collaboration to develop a framework on regulating AI, with the “guardrails of safety and trust.”

“We are very committed to AI,” he said at the Global Technology Summit in New Delhi earlier this week, hosted by Carnegie India and India’s external affairs ministry. “We certainly are focused on using AI in real-life use cases and our prime minister is absolutely a believer that technology can transform the lives of people, make governments deliver more, deliver faster, deliver better. And so AI is going to be for us used to build models and build capabilities that are aimed at real-life use cases.”

Unlike OpenAI, its biggest investor and strategic partner Microsoft — which now has an observer seat on the board — has a strong foothold in India. The software behemoth, which established its local presence in the Indian market back in 1990, has, in Bengaluru, one of its largest R&D centers outside its Redmond headquarters, plus three data centers across the country. It has over 20,000 employees across 10 Indian cities. The company is also an active investor in Indian startups.


Report: FAA should improve investigation process after a rocket launch goes awry

The U.S. Federal Aviation Administration has let launch providers conduct their own investigations in nearly every instance that a launch mishap has occurred since the start of the century — a practice that needs closer scrutiny, a federal watchdog said in a new report.

The report, published Thursday by the U.S. Government Accountability Office (GAO), takes a close look at the investigations into launch mishaps, the industry term for when a launch ends in an explosion or other failure. Mishap investigations are a normal course of action and are generally under the aegis of the FAA — but this report reveals that the practice is basically entirely operator-led, with the FAA having inadequate resources for in-house investigations.

Of the 49 mishaps that occurred between 2000 and 2023 and for which the FAA was the lead investigative authority, all were investigated by the launch operator itself, the report found. The one exception outside that set was the investigation into a fatal accident involving SpaceShipTwo in October 2014, for which the National Transportation Safety Board, not the FAA, was the lead authority.

The FAA lets the launch company lead its own investigations for a number of reasons, officials told GAO. A close understanding of vehicle design and the underlying technology is necessary when undertaking a root cause investigation into a failure, and the operators know their vehicle best. FAA officials also estimated that in-house investigations would take 10-20 times longer than those led by the operator.

The GAO said the FAA should develop criteria for determining when a mishap investigation should be led by the agency rather than the launch provider. The report further found that the FAA has not evaluated the effectiveness of its “operator-reliant” mishap investigation process.

“FAA officials told us they make their decisions to authorize operator-led investigations depending on the level of investigation required, which is largely based on severity of the mishap or its consequences and may also take into consideration the level of public interest,” the report said. “However, in practice, FAA authorized the operator involved to lead the investigation of its mishap for all 49 mishaps for which FAA had lead investigative authority.”

Even when investigations are led by the launch company, the FAA still exercises some degree of oversight and involvement in the process. But some stakeholders questioned whether the launch companies “can be impartial or effective investigators of their own mishaps.”

Launch providers told the GAO that they take steps to maintain independence of their internal investigations, and others said that market incentives and insurance requirements can also create positive incentives for a rigorous, credible investigation process.

Regardless, GAO found that the FAA does not maintain criteria for evaluating the effectiveness of operator-led investigations, and has no formal channels for operators to share their findings with the wider industry.

“Without a comprehensive evaluation of the effectiveness of its operator-led mishap investigation process, FAA cannot be assured that its safety oversight is best achieving agency objectives in an area of critical importance,” GAO said.

Fundraising trends for 2024: Get to the point, explain ‘why now’

Thanksgiving is long behind us, so unless you’re already in due diligence with a VC, you may as well pack up your fundraising knapsack and chill out until the holidays are over.

But this is an opportunity, too. The quiet weeks ahead are the perfect time to polish your pitch deck and perfect your pitch before kicking things back off in January.

According to a new report on the early-stage fundraising trends of 2023 by DocSend, things are pretty bleak for young startups. At the seed stage, founders have had to contact more investors but ended up with fewer meetings, pointing to an increasingly competitive fundraising environment.

The report shows a correlation between the number of investors contacted and both the number of meetings held and the amount of funding raised. Many seed-stage startups in the dataset managed to secure a significant number of meetings, and consequently, raised capital, by reaching out to fewer than 50 investors. In contrast, founders who contacted more than 80 investors were a lot less successful.

There may be some noise in the data, however: AI’s popularity and availability have made it easier for founders to reach out to a lot of VCs (anecdotally, that seems to be what the VCs are observing as well). The best advice? Make sure you know how VC works and what an investment thesis is.

Apple cuts off Beeper Mini’s access after launch of service that brought iMessage to Android

Was it too good to be true? Beeper, the startup that reverse engineered iMessage to bring blue bubble texts to Android users, is experiencing an outage, the company reported via a post on X on Friday. And Apple is to blame, it seems. Users, including those of us at TechCrunch with access to the app, began seeing error messages when trying to send texts via the newly released Beeper Mini, and messages are not going through.

The error message reads: “failed to lookup on server: lookup request timed out” spelled out in red letters.

Image Credits: screenshot of Beeper Mini error

In a response to a question on Reddit as to whether or not the app was broken, a Beeper team member had earlier replied “Report a problem from the app, give us a chance to look into it.”

However, Beeper CEO Eric Migicovsky responded to TechCrunch’s inquiry about Beeper Mini’s status by pointing us to the X post acknowledging the outage and providing more detail. Asked whether Apple had found a way to cut off Beeper Mini’s ability to function, he replied, “Yes, all data indicates that.”

We don’t know what this means for the future of Beeper Mini’s efforts, unless Beeper’s engineers are able to work around the problem somehow.

Migicovsky, who previously founded the smartwatch maker Pebble, has argued that Beeper Mini wasn’t just beneficial for Android users who wanted to finally join their iMessage friends’ group chats, but that it increased security for iPhone users, too.

In an interview ahead of Beeper Mini’s launch, the founder explained that green bubble texts were unencrypted.

“That means that anytime you text your Android friends, anyone can read the message. Apple can read the message. Your phone carrier can read the message. Google… literally, it’s just like a postcard. Anyone can read it. So Beeper Mini actually increases the security of iPhones,” he had told TechCrunch.
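
To make the “postcard” point concrete, here is a minimal, generic sketch of the difference between a plaintext SMS and an end-to-end encrypted message, built with PyNaCl. This is not Apple’s iMessage protocol or Beeper’s implementation; it only illustrates why a carrier or relay can read one and not the other:

```python
# Generic illustration of SMS-in-the-clear vs. end-to-end encryption,
# using PyNaCl (pip install pynacl). This is NOT Apple's iMessage
# protocol or Beeper's code; it only shows why a relay can read a
# plaintext SMS but not an E2E-encrypted message.
from nacl.public import Box, PrivateKey

# SMS: the message transits the carrier in the clear.
sms = b"meet at 7?"
print("carrier sees:", sms.decode())

# E2E: the sender encrypts to the recipient's public key.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()
wire = Box(alice_sk, bob_sk.public_key).encrypt(sms)
print("relay sees:", bytes(wire).hex()[:32], "...")  # opaque ciphertext

# Only Bob, holding his private key, can decrypt it.
assert Box(bob_sk, alice_sk.public_key).decrypt(wire) == sms
```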

Apple, on the other hand, sees iMessage as one of the key tools for locking in users to its ecosystem, which is why it won’t launch an iMessage app for Android. While there was some hope that EU regulations would force it to make iMessage more interoperable, news this week indicates that iMessage will get a reprieve from those rules because the service is not popular enough with business users. That means Apple has no reason not to try to shut down Beeper Mini, if it could.

Migicovsky is none too pleased with that turn of events.

“I would be very interested to hear why they think that making security worse for iPhone users makes sense,” he said.

“If it’s Apple, then I think the biggest question is — if Apple truly cares about the privacy and security of their own iPhone users, why would they try to kill a service that enables iPhones to send encrypted chats to Android users? With their announcement of RCS support, it’s clear that Apple knows they have a gaping hole here. Beeper Mini is here today and works great. Why force iPhone users back to sending unencrypted SMS when they chat with friends on Android?” he asked.

Founded in 2020, Beeper’s team had originally been working on a multi-platform messaging aggregator, which was renamed Beeper Cloud this week as Beeper Mini went to launch. The latter uses new technology that allows Android users to text iMessage users as if they were also texting from an iPhone for just $1.99 per month. That means blue bubbles in the group chat, not green ones. Because the startup was no longer using a middleman — like a Mac server relaying messages, as other iMessage-to-Android apps employ — it would essentially appear to Apple’s servers that Beeper Mini’s messages were coming from a device that runs iMessage natively. It’s unclear, then, how Apple was able to cut off Beeper Mini’s access.

What this means for Beeper Mini’s future is uncertain.

“We’ll evaluate options,” Migicovsky said.

DNA companies should receive the death penalty for getting hacked

Personal data is the new gold. The recent 23andMe data breach is a stark reminder of a chilling reality – our most intimate, personal information might not be as secure as we think. It’s a damning indictment of the sheer negligence of companies that, while profiting from our DNA, are failing to protect it.

The 23andMe breach saw hackers gaining access to a whopping 6.9 million users’ personal information, including family trees, birth years and geographic locations. It brings to the fore a few significant questions: Are companies really doing enough to protect our data? Should we trust them with our most intimate information?

Companies are promising to keep our data safe, but there are a couple of quirks here. Government overreach is certainly a possibility, as the FBI and every policing agency in the world is probably salivating at the thought of getting access to such a huge data set of DNA sequences. It could be a gold mine for every cold case from here to the south pole.

The argument “But if you haven’t done something wrong, you have nothing to worry about!” is only partially applicable here: The problem is one of consent. My father at one point did a DNA test, and discovered he had a half-brother who is about to turn 80. Cue an incredible amount of family drama when they started digging into the history and unearthed a whole bunch of potentially problematic family history.

The problem isn’t so much that my dad chose to do that, it is that I didn’t consent to being in a database, and that’s where things get sticky. I can envision a definite Black Mirror-esque future, where one family member is curious about their ancestry, gets tested, and two weeks later, the FBI comes knocking on every person’s door who shares 50% DNA with that person because they are wanted for some sort of crime.

The audacity of 23andMe, and companies like it, is astounding. They pitch themselves as guardians of our genetic history, as the gatekeepers of our ancestral pasts and potential medical futures. But when the chips are down and our data is leaked, they hide behind the old “we were not hacked; it was the users’ old passwords” excuse.

This logic is equivalent to a bank saying, “It’s not our fault your money got stolen; you should have had a better lock on your front door.” It’s unacceptable and a gross abdication of responsibility.

Companies that deal with such sensitive data should be held to the highest possible standard. We’re not just talking about credit card numbers or email addresses here. This is our DNA, the very blueprint of our existence. If anything should be considered “sacred” in the digital realm, surely it should be this?

The fact that the stolen data was advertised as a list of people with ancestries that have, in the past, been victims of systemic discrimination, adds another disturbing layer to this debacle. It highlights the potential for such data to be misused in the most nefarious ways, including targeted attacks and discrimination.

The DNA testing industry needs to step up. It must ensure that the security measures in place are not just adequate, but exceptional. They should be leading the charge in cybersecurity, setting an example for all other industries to follow.
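
Even a minimal defense against the credential stuffing behind this breach is cheap and well documented. Below is a sketch that checks passwords against Have I Been Pwned’s k-anonymity range API (a real, free endpoint; the surrounding logic is a hypothetical illustration, not anything 23andMe actually runs):

```python
# Sketch of one cheap defense against credential stuffing: reject any
# password that already appears in known breach corpora, via Have I
# Been Pwned's k-anonymity range API. A real deployment would pair
# this with rate limiting and mandatory two-factor authentication.
import hashlib

import requests

def is_breached_password(password: str) -> bool:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the first 5 hex chars of the hash leave our server; the API
    # returns every breached-hash suffix sharing that prefix.
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=5)
    resp.raise_for_status()
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())

if is_breached_password("password123"):
    print("Reject: this password appears in known breaches.")
```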

This is not just about better passwords or two-factor authentication. This is about a fundamental shift in how these companies view the data they are entrusted with. It’s about recognizing the profound responsibility they have, not just to their customers, but to society at large.

Am I hopeful? Not even a little. I’ve long argued that after the Equifax breach, the company should have received the corporate equivalent of the death penalty. Instead, it was given a $700 million fine. I think that’s laughable. Allowing a breach of such a magnitude to even be possible, never mind actually come to pass? You don’t deserve to continue to be a company. I think that is even truer for companies dealing with our DNA.

It’s time for 23andMe and the DNA testing industry as a whole to realize that they are not just dealing with data. They are dealing with people’s lives, their histories and their futures. It’s time they started treating our data with the respect and care it deserves.