
AI video tool plans for UGC platform Vloggi revealed (Media release)

MEDIA RELEASE – Video testimonial software Vloggi reveals new AI-powered tools for true automation of video production as part of equity crowdfunding raise

 

    • Off-the-shelf video intelligence APIs will add machine learning and object detection to power automated categorisation of video for platform users

    • Automated video production workflows will build on 2020 prototype

    • New tools combine crowdsourced video with automation to produce video 1,000x faster and 100x cheaper than any method available today

 

Sydney – 1 June 2023  – Leading user-generated video platform Vloggi has today revealed the artificial intelligence technologies that will be incorporated into its next two versions of the popular software platform. The first stage will be to massively advance the categorisation and tagging of the user-generated video clips in users’ libraries. The second stage, slated for later this year, will advance the automated video production workflows pioneered by the company prior to the COVID-19 pandemic in 2020.

Speaking to media in the company’s Australian headquarters in Sydney, Vloggi CEO and founder Justin Wastnage said the new technology upgrades were the culmination of the long-held vision for the company to fully automate video production.

“When I founded the company in late 2018, I had the dream of fully automating video production by combining user-generated video with data-led workflows. The technology wasn’t available then, so we set about building our own. Today I’m pleased to say that a lot of what we need to build is available off the shelf.”
Justin Wastnage, founder & CEO, Vloggi

The integration of video intelligence APIs into Vloggi will comprise three phases:

Phase 1 covers the deployment of components that have been engineered over the past few months:

  • Automatic enhancement and rescaling of user-generated video and audio inputs to international broadcast standards, making Vloggi the first video collection platform to produce broadcast-quality video from UGC video
  • Extraction of mobile phone video metadata to judge depth of field, orientation and color density (a minimal metadata-extraction sketch follows this list)
  • User-defined inputs to classify and catalog video assets submitted by end-users
  • Extraction of speech and audio tracks from video clips for use in captioning
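To make the metadata step concrete, the sketch below pulls recording time, orientation and resolution from a phone clip using ffprobe (part of FFmpeg). It covers only the metadata-extraction bullet; depth-of-field and colour-density scoring would need frame-level analysis. This is a minimal sketch under stated assumptions, not Vloggi’s actual pipeline: it assumes ffprobe is installed and on the PATH, and the filename is a placeholder.

```python
import json
import subprocess

def probe_clip(path: str) -> dict:
    """Run ffprobe and return its JSON description of the clip."""
    cmd = [
        "ffprobe", "-v", "quiet",
        "-print_format", "json",
        "-show_format", "-show_streams",
        path,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return json.loads(out)

def clip_summary(path: str) -> dict:
    """Reduce the raw ffprobe output to fields a cataloguing step might use."""
    meta = probe_clip(path)
    video = next(s for s in meta["streams"] if s["codec_type"] == "video")
    tags = meta.get("format", {}).get("tags", {})
    # Newer ffprobe reports phone rotation in the stream's Display Matrix side data.
    side_data = video.get("side_data_list", [{}])
    rotation = side_data[0].get("rotation", 0) if side_data else 0
    return {
        "recorded_at": tags.get("creation_time"),   # used later for time-based sequencing
        "rotation_degrees": rotation,               # portrait vs landscape
        "resolution": (int(video["width"]), int(video["height"])),
        "duration_s": float(meta["format"]["duration"]),
    }

if __name__ == "__main__":
    # Placeholder filename for illustration only.
    print(clip_summary("example_clip.mp4"))
```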

 

Speech recognition in Vloggi

Phase 2 of the AI integration will add the following features to extract greater meaning from videos:

  • Object recognition from submitted video files (an illustrative API call follows this list)
  • Facial recognition of speaker sentiment during video testimonials
  • Audio analysis for situational sound stamps
  • Analysis of speakers’ narration to gauge sentiment
  • Automatic explicit content moderation
  • Integration with external databases (item lookup by SKU)
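The release does not name a vendor for these features. As one illustration of what an off-the-shelf video intelligence API offers in this category, the sketch below runs label detection and explicit-content moderation on a clip in cloud storage using Google’s Cloud Video Intelligence client. The bucket path is a placeholder, and this is not necessarily the API Vloggi will integrate.

```python
# Illustration only: Google Cloud Video Intelligence is one off-the-shelf option;
# the release does not confirm which vendor Vloggi will use.
# Requires `pip install google-cloud-videointelligence` and GCP credentials.
from google.cloud import videointelligence

def annotate_clip(gcs_uri: str):
    client = videointelligence.VideoIntelligenceServiceClient()
    features = [
        videointelligence.Feature.LABEL_DETECTION,             # objects and scenes
        videointelligence.Feature.EXPLICIT_CONTENT_DETECTION,  # moderation
    ]
    operation = client.annotate_video(
        request={"features": features, "input_uri": gcs_uri}
    )
    result = operation.result(timeout=300).annotation_results[0]

    # Video-level labels with confidence scores, e.g. ("aircraft", 0.92).
    labels = [
        (label.entity.description, label.segments[0].confidence)
        for label in result.segment_label_annotations
    ]
    # Timestamps (seconds) of frames flagged as likely explicit.
    flagged = [
        frame.time_offset.seconds + frame.time_offset.microseconds / 1e6
        for frame in result.explicit_annotation.frames
        if frame.pornography_likelihood >= videointelligence.Likelihood.LIKELY
    ]
    return labels, flagged

if __name__ == "__main__":
    labels, flagged = annotate_clip("gs://example-bucket/testimonial.mp4")
    print(labels[:10], flagged)
```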

 

AI video tools including facial recognition will debut in Vloggi 4.0

 

Phase 3 will focus on using data to refine the proprietary automated workflows that already exist in the platform, and will include:

  • Automated quality assessment of submitted videos
  • Automated trimming of video clips to remove hands
  • Sequenced workflows and templated video production
  • Conditional workflows that produce videos based on a set list of criteria (a simple rule-based sketch follows this list)
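As a sketch of how a conditional, sequenced workflow could be expressed, the snippet below gates clips against a set list of criteria and orders the survivors by recording time. The thresholds and the Clip fields are hypothetical, chosen to mirror the metadata extracted in the earlier ffprobe example; Vloggi’s actual acceptance rules are not public.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Tuple

# Hypothetical criteria; Vloggi's real quality rules are not published.
MIN_SHORT_EDGE_PX = 720
MIN_DURATION_S = 3.0
MAX_DURATION_S = 60.0

@dataclass
class Clip:
    path: str
    recorded_at: datetime
    resolution: Tuple[int, int]
    duration_s: float

def passes_quality_gate(clip: Clip) -> bool:
    """Conditional step: only clips meeting every criterion enter the edit."""
    short_edge = min(clip.resolution)
    return (
        short_edge >= MIN_SHORT_EDGE_PX
        and MIN_DURATION_S <= clip.duration_s <= MAX_DURATION_S
    )

def build_sequence(clips: List[Clip]) -> List[Clip]:
    """Sequenced step: accepted clips are ordered by when they were recorded."""
    accepted = [c for c in clips if passes_quality_gate(c)]
    return sorted(accepted, key=lambda c: c.recorded_at)
```

A templated production step would then map each accepted clip to a slot in the chosen template before rendering.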

The implementation of these technologies will take place in the third and fourth quarters of 2023, subject to Vloggi completing its current seed round of funding.

“Providing a clear road map of technology to investors was really important to us. We took a pragmatic view of delivering the large-scale video cataloguing and indexing our corporate customers now require as they build their own product video libraries. We balanced the proprietary systems we have been developing over the past four years against the AI video tools now readily available, and decided to focus our research and development efforts on perfecting automated video workflows in different verticals.”
Justin Wastnage, founder & CEO, Vloggi

Invest in AI video tools with Vloggi via OnMarket

Your opportunity to become a shareholder in Vloggi and invest in the future of AI-powered video automation (always read the CSF risk warning and offer doc)

Vloggi was a pioneer of AI video tools

Vloggi was founded on the concept of automating video production. Release 2.0 of the platform included a Magic Minute feature that would automatically make a one-minute highlight reel from a folder of video clips.

Vloggi 2.0 featured Magic Minute, a proprietary algorithm that uses AI to automatically make 1-minute highlight reels

The algorithm for the Magic Minute feature, developed by Wastnage, was trained primarily on tourism promotion videos and picked the best clips based on a 12-point data analysis workflow. The feature was shelved in the wake of COVID-19, when the company shifted its focus away from travel and tourism during the pandemic.

AI video creation using timestamps

Vloggi invented the video diary blog during the first COVID lockdown as a way of sequencing timestamped video clips from the same contributor

In 2020 the company created workflows based on the timestamps contained within the metadata of mobile phone footage. During lockdown isolation, it built video diaries that extracted the time and location of each entry and automatically sequenced the clips into a diary format.

This time-based AI video logic was later repurposed to create before-during-after business reporting videos: employees submit videos from their phones, and Vloggi’s systems automatically sort them by time, apply job details, location and other formatting, and produce a video report that can be kept on file or sent to the client.
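A minimal sketch of that time-based sequencing is shown below: it sorts clips by their recorded timestamps and stitches them with ffmpeg’s concat demuxer. It assumes ffmpeg is installed and that the clips share a codec and resolution; the job-detail overlays and formatting Vloggi applies are omitted, and the filenames are placeholders.

```python
import subprocess
from datetime import datetime
from typing import List, Tuple

def stitch_by_time(clips: List[Tuple[str, datetime]], output: str = "report.mp4") -> None:
    """Sort (path, recorded_at) pairs by time and concatenate them with ffmpeg.

    The timestamps could come from the ffprobe metadata shown earlier. Assumes
    all clips share a codec and resolution; a real pipeline would normalise first.
    """
    ordered = sorted(clips, key=lambda c: c[1])
    with open("concat.txt", "w") as f:
        for path, _ in ordered:
            f.write(f"file '{path}'\n")
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", "concat.txt", "-c", "copy", output],
        check=True,
    )
```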

Vloggi develops video reporting tools for businesses, using timestamps from employee phone footage to sequence clips

During 2020, Vloggi built over 50 sequenced video automations for different business sectors. Another example is Salon Selfie, designed for hair salons: clients upload video clips throughout their treatment, which are then sequenced by time and presented back as a fully formatted souvenir of the visit. With the customer’s permission, the hairdresser can use the AI-sequenced video story in their marketing instead of manually repurposing content for social media.

Salon Selfie is an automated video workflow developed by Vloggi for hair salons to automatically compile video from customer footage

Having proved the automation flows with small businesses, Vloggi moved towards enterprise clients as part of its strategic plan. For the past two years, the company has steadily been building a client base among corporates for the mass collection of community and customer footage. Clients today include Amazon, PayPal, Qantas and MYOB.

“Even the best AI video generator needs a feedstock of footage. Before companies can effectively use generative AI in videos, they will need vast libraries of footage. Clearly customer footage is the cheapest and most authentic form of footage for product video in situ. We believe that combining our video collection technology with our AI video tools will turbocharge these generative AI video tools and produce video at a scale never before seen. We are closer to achieving our goal of automating video for the first time.”
Justin Wastnage, founder & CEO, Vloggi

Read more: Before companies can harness generative AI videos, they will need customer-generated clips as inputs

“When I founded the company in 2018, I built a prototype that used the metadata from video clips to sort and sequence finished video stories. But the machine learning technology for video was nascent back then, so we attempted to build our own models. Today’s off-the-shelf technologies have made such a leap forward that the hurdle of training our own models has now evaporated,” said Wastnage.

“I remember in 2019 we trained a model against a video of Marina Bay Sands, a mall in Singapore. It was very good at picking up people but couldn’t provide much analysis of intent. Today’s off-the-shelf AI video tools pick out brands, shopping, ethnicity and hundreds of other data sets that would have taken us years to train ourselves,” said Wastnage.

Marina Bay Sands Mall in Singapore overlaid with machine learning object detection from Vloggi 2.0

 

Similarly, in a user-generated video of the Patrouille de France flying over the Louvre in Paris, the AI at first detected birds before recognising them as jets. The iconic Pyramide was not picked up, though it would be now, said Wastnage. While free AI generator tools are still some way from conjuring up such a complex scene, the Louvre has enough source footage for text-to-video AI generators to be able to generate it. Companies, however, will not have enough source footage of their own products until they add crowdsourced user-generated content to their generative AI video workflows, Wastnage added.

Vloggi machine learning image recognition tracks the Patrouille de France flying over the Louvre

The AI video tool integration will be completed by the end of the year, subject to the capital raise being funded. The company completed R&D on its proprietary automated video workflows after a previous capital raise, which closed in 2021 at its target of A$750,000. It is now raising a further A$750,000 to commercialise its technology in the US and to integrate AI video intelligence APIs into its existing product. You can register to become a shareholder here: onmarket.com.au/offers/vloggi-eoi

– ends –

About Vloggi

Founded in 2018, Sydney tech startup Vloggi is the world’s first collaborative video production platform using AI to process, sort and annotate user-generated video for use by businesses and social groups. Vloggi is a registered trademark of its parent company, tech investment vehicle Ciné Souk, founded in 2017.

The company was founded by Justin Wastnage, a former director of the Tourism & Transport Forum and founder of the Flight TV aviation video channel. Wastnage started his career with interactive digital television pioneer Static 2358 (later PlayJam) before creating video content for Microsoft. The company’s technical team is headed by co-founder and CTO Jeremy Giraudet, who developed podcasting distribution and storytelling software for global technology companies.

Vloggi is based in the Fishburners coworking space within the Sydney Startup Hub (SSH).

Media enquiries

David Binning, Brand Comms Bureau
[email protected] / +61 406 397 033


