


I am drawing some images and need to save the file to use it later, but when using the command sendxy=1, it only sends out the first touch point and not all the points along the line.

Is this already available and I just need to write some extra code, or is it a possible future development?




I am now reviewing all of the Feature Requests; this will take some time, so patience please.

Request to have sendxy transmit drag points not just start/end - carried forward.

Thank you for your post. It is indeed a very important issue.

After carefully reading "Your programming logic", the approach of drawing lines point by point will not work at all, because user input will always be a continuous stroke on the screen. No one will draw a polyline with multiple taps, like someone perforating a sheet of paper... It is not intuitive.

With the current instruction set I cannot see how to achieve this, but if sendxy had a different behaviour I believe it would become possible, i.e.:

sendxy Touch Press Event - send coordinates of the touched pixel (already implemented)
sendxy Touch Release Event - send the coordinates of the pixel where the user stops touching the screen

And, if possible, a Touch Move event, like in the slider component, sending intermediate coordinates at a rate controlled by a timer, depending on the quantity (precision) of points needed to draw the line.

If these new functions could be implemented, it would be a great improvement for the Nextion (just a thought).

All the best,


Ok, thanks for the honesty.


You're welcome.  Two weeks ago, I was just a user like everyone else. 

Now I am hoping my new role will allow better communications.

It is something I expected as a user, something I hope to deliver as staff.

thank you


As well as sendxy sending the first touch point's coords in a "Pressed" report, it also sends that same first touch point's coords in a "Released" report (so a press can be matched to a release to confirm a touch is complete).
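For reference, matching presses to releases on the connecting MCU is straightforward once you parse the touch coordinate reports. The sketch below assumes the 9-byte 0x67 report from the Nextion instruction set (0x67, X high/low bytes, Y high/low bytes, press/release state, then the 0xFF 0xFF 0xFF terminator); verify the exact layout against your firmware's documentation before relying on it.

```python
def parse_touch_events(buf: bytes):
    """Parse Nextion 0x67 touch-coordinate reports from a byte stream.

    Assumed format (check your instruction-set docs): 0x67, Xhi, Xlo,
    Yhi, Ylo, state, 0xFF 0xFF 0xFF, where state is 0x01 for press
    and 0x00 for release.
    """
    events = []
    i = 0
    while i + 9 <= len(buf):
        if buf[i] == 0x67 and buf[i + 6:i + 9] == b'\xff\xff\xff':
            x = (buf[i + 1] << 8) | buf[i + 2]
            y = (buf[i + 3] << 8) | buf[i + 4]
            events.append((x, y, 'press' if buf[i + 5] else 'release'))
            i += 9
        else:
            i += 1  # resynchronise after garbage bytes
    return events

# A matched press/release pair at the same coordinate marks one touch:
frames = bytes([0x67, 0x00, 0x64, 0x00, 0xC8, 0x01, 0xFF, 0xFF, 0xFF,
                0x67, 0x00, 0x64, 0x00, 0xC8, 0x00, 0xFF, 0xFF, 0xFF])
```

Feeding `frames` through the parser yields one press and one release at (100, 200), i.e. a completed touch.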

First, I have not seen any functions capable of capturing a screen drawing.

Next, if you could somehow hook the pixel-drawing function to an interrupt that sends that coordinate to serial, data loss would occur quickly whenever an interrupt is lost to many pixels being drawn/recorded back to back.

Next, capturing a drawing through sendxy coords in real time would push about 18 bytes to the serial line for each and every pixel pressed/released. It wouldn't take long before data loss occurs (either a pixel is not captured because the serial (frame) buffer is full, or the (circular) buffer wraps and overwrites other data before it can be sent out over the wire).
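A quick sanity check of that budget, assuming the 18 bytes mean a 9-byte press report plus a 9-byte release report per captured pixel (my reading of the figure above), and 10 bits per byte on an N81 link:

```python
# Back-of-envelope serial budget for streaming drawn pixels.
baud = 115200
bytes_per_second = baud // 10        # N81: 10 bits on the wire per byte
bytes_per_pixel = 18                 # assumed: 9-byte press + 9-byte release
pixels_per_second = bytes_per_second // bytes_per_pixel
print(pixels_per_second)             # ceiling on reportable pixels per second
```

That ceiling is 640 pixel reports per second; a quick drag across a 320-pixel-wide screen in a fraction of a second touches pixels far faster than that, so something must be dropped.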

Next, there is no function exposed to read a pixel value from the display's GRAM.

Next, it cannot be stored in 8K of SRAM (at 18 bits per pixel, 240x320 is 172,800 bytes and 800x480 is 864,000 bytes).

Next, there is no file system defined by the firmware to, say, save it to Flash.

Next, even if it could be saved to flash, it is certainly a long download over the serial Tx/Rx line (a minimum of 15 s at 115200 baud for 240x320, and a minimum of 75 s at 115200 baud for 800x480). If for any reason you couldn't maintain the highest 115200 baud speed, then double those times at the next speed down, 57600 baud.
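The sizes and times quoted above work out exactly if one assumes 18 bits per pixel and 10 bits per byte on an N81 wire; a quick check:

```python
# Verify the framebuffer sizes and transfer times quoted above.
def framebuffer_bytes(w, h, bits_per_pixel=18):
    """Framebuffer size, assuming 18-bit colour (2.25 bytes/pixel)."""
    return w * h * bits_per_pixel // 8

def transfer_seconds(nbytes, baud=115200):
    """Wire time over N81 serial (10 bits per byte)."""
    return nbytes / (baud / 10)

small = framebuffer_bytes(240, 320)   # 172,800 bytes
large = framebuffer_bytes(800, 480)   # 864,000 bytes
print(small, round(transfer_seconds(small)))   # -> 172800 15
print(large, round(transfer_seconds(large)))   # -> 864000 75
```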

Interesting thought though.

Hello Patrick,

Will it be possible to implement that feature or not?
There is a new 3.2 inch release that already supports RTC, 8 GPIO and flash data save, so maybe this feature could also be implemented...
The problem now is that I am stuck on my prototype just because of the drawing and saving part.
Please answer more objectively, so I can understand what options I have, and whether I can keep this display, need to change to the new one, or to something completely different.
Or perhaps you could suggest another option.


Hi Timoteo

First, I need to clarify that I am not Itead staff. I was offering approaches and factors needing consideration that might help you achieve your result with the existing standard-version displays. Understand that, as programmers, finding ways and methods to achieve a goal and push the hardware to its limits is what we do.

As you pointed out, since your original request 14 days ago, we have seen there is a new enhanced version coming that can be pre-ordered now.  It definitely looks like it will have more capabilities, though I thought I saw 1024 bytes of space to save to, and that still hasn't addressed your feature request of being able to save to the flash chip.

May I recommend you open a support ticket. Since Itead also provides custom PCB work, I have to assume there would be many options available (that are not limitations of the standard edition) for you to realize your goal. I would recommend discussing your specific requirements with them in a much more direct way.

Hello Patrick,

I do understand what you say; my question was whether it is possible to implement this in the firmware.
I know it can take a very long time to refresh, but as a first approach it could be interesting for the display to send the whole stylus track over serial, and then I could just save it to an SD card.



I will request a little less condescension. At least I offered an approach that could function toward an ambiguous, under-described goal, and that might achieve a result while waiting for the request to be considered.

One difficulty I see is providing common ground across the multiple devices: the larger versions, with their graphics driver on the PCB, are quite different from the smaller versions, where the driver is embedded in the display. The other difficulty is the limited capacity of the PCB's MCU. I suppose one could fill the MCU with software to the point that there is no room left to process the user-designed HMI, but I would rather put only the necessary firmware on the MCU and leave out anything that can be user-coded and driven from the user MCU connected to the display.

I think the resulting issue, if the function could be implemented, becomes the delay between the time the HMI user has drawn the line and when the limited amount of data the STM MCU was able to buffer can be returned to your connecting MCU. On the smaller displays, the inaccessible ILI9341 graphics driver may record a press when contact is made, but does it even have the circuitry to track movement, or does it merely record the position where contact is lost? This would coincide with the touch/release events we have in the Editor. I will add that, given the interrupt capabilities of the STM32F0 MCU, a single interrupt can have one "waiting interrupt" recorded if the linear execution of that second interrupt won't interfere with the stack pointer; otherwise it is generally discarded or lost. There certainly aren't provisions for a third, fourth and fifth, and that is why, when you are dragging your stylus in draw mode, you get skips.

However, what is occurring in draw mode, as the behaviour shows, is this: on contact, sendxy is sent to the outgoing buffer and is then suspended, to avoid a slew of instructions recording to the outbound buffer, followed by the series of 90 bit high/lows (9 bytes at 10 bits each over an N81 serial connection) for each point captured and not lost to the cascading-interrupt limitation described above. After the drag has completed and contact has been lost, the final point is sent to the buffer and sendxy resumes.

One point to consider in adding this extra series of commands: can the user's MCU, connected over the serial RX/TX lines, even capture all the additional data this request asks for, as fast as the user draws it on the screen? What limitations do you impose on your user via software? What interval in ms will you implement, so that your MCU-side software knows points are missing and intuitively fills them in?
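As a sketch of what "intuitively filling in the missing points" on the connecting MCU could look like, one could linearly interpolate between consecutive reported coordinates. This is illustrative only, not Nextion-specific code:

```python
def interpolate(p0, p1):
    """Return every integer point on the segment p0 -> p1, endpoints
    included, stepping along the longer axis and rounding the other."""
    (x0, y0), (x1, y1) = p0, p1
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    return [(round(x0 + (x1 - x0) * t / steps),
             round(y0 + (y1 - y0) * t / steps))
            for t in range(steps + 1)]

# Two sampled coords 4 px apart become a gap-free run of 5 points:
print(interpolate((0, 0), (4, 2)))
```

The shorter the sampling interval, the closer these straight-line segments approximate the user's actual stroke.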

I will refer back to my point about the common ground needed across all devices. I assume that with a 108 MHz chip being put in the larger Enhanced versions, it may indeed be possible in the future on a more capable device. I might even go so far as to assume it will be a function of the Enhanced and, sadly, may not be included in the Standard devices. I will point out that "intuitive" behaviour is actually achieved on your MCU, the one connected to the Nextion display, via your software programming; obviously a Raspberry Pi or an Intel Edison will have greater capability to achieve that intuition than an Arduino Nano or an ATtiny85.

And finally, re-reviewing "My Programming Logic", it may not be as far out to lunch as one might think.


There is nothing wrong with your request. To clarify my point: I was iterating through the potential barriers and suggesting some ways one could begin to think about finding a solution that functions. I will relate this to a "This doesn't work" question: without more details (can you show what code you have, where your HMI is so I can take a look, what the environment is: Arduino, Intel, other; which libraries, whose libraries), it is hard to describe a one-size-fits-all solution.

From the beginning of time, hardware circuitry only takes things to a point, and then software kicks in to create virtual circuitry to accomplish the task. I have been digging deep into the ILI9341 drivers (I don't even know which Nextion display you have; it might have slightly different drivers and hardware) and their instruction set, coupled with the STM32F0 variant of the Cortex-M0 Thumb instruction set, coupled with Nextion variables and the Nextion instruction set, reviewing the binary output of the TFT files to find how you might accomplish your goal. I have not completed my work, but some thoughts on an approach come to mind.

What if you were to include a timer set at 50 ms (the minimum value for a timer) in your initialization and touch/release logic in your HMI? You may be able to kick your touch release. I haven't personally tried it yet. You might have to store your current drawing variables (brush color, etc.) into user-defined variables, and then, back to back: turn drawing off, reinitialize the draw variables from your stored values, and turn drawing back on, with the timer reinitialized in the touch-pressed logic. You might actually be able to simulate your effect. I'll point out that 50 ms is at most 20 times per second, and the logic may steal a few in reality, but you might be able to automate and realize 15 points per second.
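For what it's worth, the serial side of the 50 ms timer idea is comfortably within budget. Assuming one 9-byte coordinate report per timer tick over an N81 link:

```python
# Arithmetic behind the 50 ms timer approach: the sampled stream is
# tiny compared with what the serial link can carry.
timer_ms = 50
reports_per_second = 1000 // timer_ms             # 20 samples/s at best
bytes_per_second_needed = reports_per_second * 9  # 9-byte report assumed
link_capacity = 115200 // 10                      # N81: 10 bits/byte
print(reports_per_second, bytes_per_second_needed, link_capacity)
```

So the bottleneck is the sampling rate itself (and the interrupt handling on the display), not the wire.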

The understanding, of course, is that your user could outdraw that with a quick drag, and that it depends on the display driver interpreting a "press already in progress" as a new press event when drawing is switched back on. If the driver requires its voltage to change from an actual press/release on the screen, this approach will of course fail.

Does it mean that it will be implemented?
Thanks a lot.


Being honest, it means it is on the list of things to be prioritized in the next round

(as opposed to not making the cut at all)

There are a couple of requests that are really cool

 - Custom user components that can be shared

 - Transparency support with 1555

 - Many Coding improvements

 - Font issues and variable width

 - x,y touch cords as variables

 - SD access

So, it becomes a question of which ones come first - the list is about 147 items across 18 categories

Sharable custom user components... a tough one to top. I think it will be the most popular.

Ok, thank you.
I will try that approach.
Do you know where I can open a support ticket?

