Acta Scientific Computer Sciences

Research Article Volume 4 Issue 4

Challenges of Content Production Using Large-scale Artificial Intelligence Tools

Nima Karimi1 and Morteza Borhani2*

1Islamic Azad University, Central Tehran Branch, Iran
2Department of Management, Islamic Azad University, Central Tehran Branch, Iran

*Corresponding Author: Morteza Borhani, Department of Management, Islamic Azad University, Central Tehran Branch, Iran.

Received: December 19, 2021; Published: March 22, 2022

Abstract

Content production with artificial intelligence may reach good maturity in the not-too-distant future and meet the needs of small and medium-sized businesses. Broadcasters, as the largest content producers and publishers in the country, along with other creative content producers, need to make arrangements, especially in the areas of vision, organizational culture, processes, staff capabilities, and their systems and tools, and to understand the strategy of content production by artificial intelligence, which is discussed in this article.


Keywords: Content Production; Artificial Intelligence; Broadcasters; Deepfake; Neural Networks


Citation

Citation: Nima Karimi and Morteza Borhani. “Challenges of Content Production Using Large-scale Artificial Intelligence Tools”. Acta Scientific Computer Sciences 4.4 (2022): 56-63.

Copyright

Copyright: © 2022 Nima Karimi and Morteza Borhani. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.



