Debugging some high memory consumption showed something that should have been obvious for a long time: the more you break views up into little pieces and do renderView/let() calls within (i.e. deeply nested inside) other renderView/let() calls, the more expensive each render becomes.
This is because Lucee/Java stores the result of each call in a variable; that content is then saved into the parent renderView()'s variable, and so on up the chain.
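For illustration, a simplified (and hypothetical) example of the pattern: a parent view renders each of its pieces via renderView(), and each of those views may call renderView() again, so every level's full output is held in memory as a string before being concatenated into its parent's result.
<!--- hypothetical parent view: each renderView() call below builds its output
      as a string in memory before it is concatenated into this view's own result --->
<cfoutput>
    <div class="page">
        #renderView( view="sidebar", args=args )#
        #renderView( view="content", args=args )#
    </div>
</cfoutput>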
Doing <cfinclude template="some.cfm" /> does not suffer from this problem, as the output is buffered directly and there are no intermediate variables. However, renderView() has many advantages over cfinclude - particularly its ability to be entirely encapsulated and free from leaking variables into the parent view, and also the ability to define views in an extension/core that can be extended by your application.
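To illustrate the encapsulation point (the template, view and variable names below are made up): anything an included template sets in the variables scope is visible to the caller, whereas a view rendered with renderView() only hands back its rendered string.
<!--- some.cfm does e.g. <cfset pageTitle = "Hello" /> --->
<cfinclude template="some.cfm" />
<cfoutput>#pageTitle#</cfoutput> <!--- the variable has leaked into this template --->

<!--- with renderView() the view runs in its own scope; only its output comes back --->
<cfoutput>#renderView( view="some", args={ title="Hello" } )#</cfoutput>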
Proposed solution
Where you are directly outputting a view or viewlet, provide new outputView() and outputViewlet() helper methods that buffer the output directly rather than returning a string. So:
<div class="wrapper">
    #outputView( view="mymicroview", args=args )#
</div>
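Under the hood, the idea is simply that outputView() writes straight to the page's output buffer instead of building and returning a string. A minimal sketch of that idea, assuming a hypothetical resolveViewPath() helper and ignoring the variable-scoping work the real helper would also need to do:
// sketch only, not the proposed implementation
public void function outputView( required string view, struct args={} ) {
    var viewPath = resolveViewPath( arguments.view ); // hypothetical: maps the view name to a template path
    include "#viewPath#"; // the included template's output goes straight to the buffer - no intermediate string
}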
For viewlets, this becomes a little trickier, especially where they are used in extensions where there is an intent to allow them to be extended. For this I am proposing that we have a new viewlet pattern (a rough sketch follows the list below):
outputViewlet →
- call the handler if there is one
- if the handler returns a string, echo it out
- if the handler returns null, then go ahead and output the associated view using outputView()
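A rough sketch of that dispatch logic (the helper names here are illustrative only, not a proposed API):
// illustrative sketch of the outputViewlet() flow described above
public void function outputViewlet( required string event, struct args={} ) {
    if ( viewletHandlerExists( arguments.event ) ) { // hypothetical helper
        var result = runViewletHandler( event=arguments.event, args=arguments.args, bufferedViewlet=true ); // hypothetical helper

        if ( !IsNull( result ) ) {
            echo( result ); // the handler returned a string, echo it out
            return;
        }
        // the handler returned null, so fall through to output the associated view
    }

    outputView( view=getDefaultViewForViewlet( arguments.event ), args=arguments.args ); // hypothetical helper
}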
For this to work well, a viewlet using this pattern will have access to helper methods in the event scope + a new argument to tell it whether or not it is being called using renderViewlet() or outputViewlet(), should it need to offer backward compatibility, e.g.:
private function myViewlet( event, rc, prc, args={}, bufferedViewlet=false ) {
    // ...
    if ( somethingSpecial ) {
        if ( arguments.bufferedViewlet ) {
            event.deferViewlet( "new.viewlet.to.render" );
            return;
        }

        return renderViewlet( event="new.viewlet.to.render", args=args );
    }

    // ...
    if ( arguments.bufferedViewlet ) {
        event.setViewletView( "/non/default/view/for/viewlet" );
        event.setViewletArgs( { test=true } );

        return; // the view output will be handled by the system, not your viewlet
    }

    return renderView( view="/non/default/view/for/viewlet", args={ test=true } );
}
A viewlet does not have to care about backward compatibility; i.e. if it is a brand new one, the example above could be reduced to the following, so long as it is understood that it will not support renderViewlet():
private function myViewlet( event, rc, prc, args={}, bufferedViewlet=false ) {
    // ...
    if ( somethingSpecial ) {
        event.deferViewlet( "new.viewlet.to.render" );
        return;
    }

    // ...
    event.setViewletView( "/non/default/view/for/viewlet" );
    event.setViewletArgs( { test=true } );
}
renderView() + renderViewlet() still necessary
Direct output buffering is not always an option. For example, where you need to first render a view/let and then check to see whether or not it has content:
<cfscript>
    students = renderViewlet( event="students", args=args );
</cfscript>
<cfif Len( Trim( students ) )>
    <cfoutput>
        <h2>Students</h2>
        #students#
    </cfoutput>
</cfif>
But the new options should give extension providers in particular some strategies for optimising their output.
Any feedback?