Use your mobile camera full-screen in a web browser with ReactJS instead of a mobile application

What is this article about?

I believe the world is better without mobile applications.

  • Users do not need to download applications and keep up with their constant updates.
  • Developers can avoid paying a substantial amount of commission to the giant corporations Apple and Google. That means the services could be cheaper without the commission.
  • Developers can save the time spent learning different programming languages for the same application: one for the web-browser version and another for the mobile application.

There is an alternative to mobile applications: they can be replaced by modern web browsers that fully support HTML5, such as Chrome, Firefox and Safari.

Using Next.js, a popular framework built on ReactJS, I have been building a web application that runs an ERP (Enterprise Resource Planning) system. It has an accounting module that allows users to take photos of transaction evidence, such as contracts and invoices, and link them to the relevant journal entries. It is an audit-friendly ERP system. I will talk about the ERP system in a different article, since it is not the point of this one. The structure can be better explained by the following diagram.

Most users would take the photos with their private phones. I do not want to make the users download and install any mobile application on their phones. The users would not want to save work-related documents on their personal phones either; there could be privacy and security issues.

The alternative way of taking photos is to use modern web browsers that support HTML5, such as Chrome, Firefox or Safari. The images are saved in React state temporarily, but they do not remain on the users' phones, and more importantly, the users do not need to download and install any additional mobile application.

This article is about how to implement taking photos in web browsers, mimicking the way the same task is done with mobile applications.

The following photo shows the final result in one of the web browsers I mentioned, Safari. A photo is taken when the red button is pressed, and the rear-facing (away-from-user) and front-facing (toward-user) cameras are switched by pressing the top-left loop button.

It is full-screen, and the quality of the photos is not downgraded. It can work as a complete replacement for a mobile application when it comes to taking photos.

Gosung Gun, Kangwon-Do, South Korea

The scope of this article

In this article, I would like to cover how to

  1. Take photos in the browser,
  2. Get a dataUrl from the captured image,
  3. Display the video, buttons and image in the browser.

Therefore, I do not discuss how to send the dataUrl of the photo to a backend database; I will cover that in a different article.

Custom Hook and Component

It is always good to separate logic from a component as much as possible in ReactJS or Next.js. That way, not only does the code stay neat, but the component and the logic are also easily reusable.

I have built a custom hook called useCamera() and a component called SimpleCameraOnMobile.

The hook, useCamera(), takes care of the logic:

  1. Take photos in the browser,
  2. Get a dataUrl from the captured image,

And the component, SimpleCameraOnMobile, mostly takes care of the presentation:

  3. Display the video, buttons and image in the browser.


I have already made the code public in my git repository and npm.

They are all public, so you can easily check out the latest version in the git repository and install the npm module.

As of writing this article, the version is v1.0.3. The code may change in the future, but I will be careful about backward compatibility.

useCamera v1.0.3

1) getElementsByTagName (lines 15–16)

As mentioned, useCamera() takes the logic out of the component. As soon as the component is mounted, this hook finds the video and canvas elements with "getElementsByTagName". If there is more than one video or canvas, only the first one is considered.
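Since the gist is not embedded here, the lookup step can be sketched roughly as follows. The function name and shape are my assumptions; in the real hook this runs inside a useEffect on mount, and the document object is passed in here just to make the idea easy to see.

```javascript
// Sketch of the element-lookup step (assumed names, not the hook's exact code).
function findCameraElements(doc) {
  // getElementsByTagName returns a live collection; only the first
  // <video> and <canvas> are used, matching the hook's behavior.
  const video = doc.getElementsByTagName('video')[0] ?? null;
  const canvas = doc.getElementsByTagName('canvas')[0] ?? null;
  return { video, canvas };
}
```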

2) getUserMedia (lines 25–27)

Once the elements are found, the hook starts preparing the user's camera after permission is granted. The options are set by passing a constraints object to getUserMedia().

If everything goes well, a video stream is generated. The stream is sent to the video element with "video.srcObject = stream".
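This step can be sketched like so. The exact constraints object is an assumption; the article only confirms facingMode, which starts as "environment" (the rear camera).

```javascript
// Sketch of the getUserMedia step; constraint details beyond facingMode
// are assumptions, not the article's exact code.
function buildConstraints(facingMode) {
  return { audio: false, video: { facingMode } };
}

async function startCamera(video, facingMode) {
  // Prompts the user for camera permission on the first call.
  const stream = await navigator.mediaDevices.getUserMedia(
    buildConstraints(facingMode)
  );
  video.srcObject = stream; // hand the live stream to the <video> element
  return stream;
}
```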

3) Locate the canvas underneath the video element (lines 28–37)

This is the most essential part of taking a photo: how do we capture one frame out of the video stream and turn that frame into an image?

It is achieved by drawing one video frame onto the canvas element. In order to do that, we place the canvas element underneath the video element.

Locate the canvas element underneath the video element
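One possible way to stack the two elements is shown below. The original styling code is not reproduced in this article, so treat this as a sketch: absolute positioning over the same origin keeps the drawn frame aligned with what the user sees, and sizing the canvas to the video's intrinsic resolution preserves full photo quality.

```javascript
// Assumed styling sketch: place the canvas directly underneath the video.
function stackCanvasUnderVideo(video, canvas) {
  video.style.position = 'absolute';
  video.style.zIndex = '2';  // video on top: this is what the user sees
  canvas.style.position = 'absolute';
  canvas.style.zIndex = '1'; // canvas hidden directly underneath
  // Match the canvas pixel size to the video's intrinsic size so a
  // drawn frame is not scaled down (keeps full photo quality).
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
}
```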

4) Start the video (line 38)

Once the canvas element is placed underneath the video element, it is ready to play the video. Let's turn the video on with "video.play()".

If everything has gone well up to this point, the user will see the video playing on his or her mobile. Because I set facingMode to "environment" (line 21), the camera facing away from the user is on.
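One detail worth noting: in modern browsers video.play() returns a Promise and can reject, for example when autoplay is blocked, so it is worth awaiting. A minimal sketch (the wrapper name is mine, not the hook's):

```javascript
// Sketch: await play() and handle rejection (e.g. NotAllowedError
// when autoplay is blocked by the browser).
async function startVideo(video) {
  try {
    await video.play();
    return true;
  } catch (err) {
    console.error('Could not start video playback:', err);
    return false;
  }
}
```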

5) captureImage (line 54)

The function captureImage() draws one video frame onto the canvas element at the moment it is executed. Even though this is usually triggered by a user action, such as pressing a button, you, as a developer, can customize it.
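The capture step can be sketched as follows, with an assumed parameter shape: drawImage copies the current video frame onto the canvas, and toDataURL serializes the canvas to the base64 dataUrl the hook returns.

```javascript
// Sketch of captureImage (assumed signature): copy the current video
// frame onto the canvas, then serialize the canvas to a dataUrl.
function captureImage(video, canvas) {
  const ctx = canvas.getContext('2d');
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  // toDataURL returns "data:image/png;base64,..." for the drawn frame
  return canvas.toDataURL('image/png');
}
```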

6) switchCameraFacingMode (line 50)

The function switches the state cameraFacingMode between "environment" and "user". The state is initially set to "environment" because this hook is meant to be used on mobile phones. When a user wants to switch the camera direction, this function can be executed. Like captureImage(), it is meant to be triggered by pressing a button.
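The toggle itself boils down to a pure function, sketched here; in the hook it would feed a React state setter, something like setCameraFacingMode(switchFacingMode) (names assumed).

```javascript
// Sketch of the facing-mode toggle as a pure function.
function switchFacingMode(mode) {
  return mode === 'environment' ? 'user' : 'environment';
}
```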

That is it for useCamera(). As mentioned, it takes care of only the logic, not the styles. The beauty of separating the logic from the component is that this hook can be used in any customized component. In short, it is reusable.


SimpleCameraOnMobile v1.0.3

This is the component responsible for how the user interface looks. For example, it decides where the video is placed, how big the video should be, where the buttons are located and what color the buttons should be.

Most importantly, you have to decide what to do with the imageUrl returned from useCamera() after captureImage() is triggered. You could send it to a backend database or just display it with <img src="data:image/png;base64, iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblA…"/>.

It is you, as a developer, who should build a component to your own taste using useCamera(). I would like to share mine just for reference.


This component displays the video full-screen. As soon as a photo is taken, the dataUrl of each image is saved one by one in the state imageDatas, which is an array. The imageUrls in the array are displayed in a 2 × n grid below the video element, as shown below.
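The bookkeeping behind that grid can be sketched with two small pure functions (names assumed): append each new dataUrl immutably, as a React setState updater would, then chunk the array into rows of two for rendering.

```javascript
// Sketch with assumed names: immutable append plus 2-column chunking
// for the grid of captured images below the video.
function appendImage(imageDatas, dataUrl) {
  return [...imageDatas, dataUrl];
}

function toRows(imageDatas, cols = 2) {
  const rows = [];
  for (let i = 0; i < imageDatas.length; i += cols) {
    rows.push(imageDatas.slice(i, i + cols));
  }
  return rows;
}
```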

Here is a QR code that directs you to my component-testing webpage. Turn on your mobile camera and point it at the QR code below to see what happens.

That is it. I hope you enjoyed this article. If you have comments or questions, please leave them below. Thank you for reading.


